Test Report: KVM_Linux_crio 19736

c03ccee26a80b9ecde7f622e8f7f7412408a7b8a:2024-10-01:36456

Failed tests (33/311)

Order  Failed test  Duration (s)
34 TestAddons/parallel/Ingress 156.09
36 TestAddons/parallel/MetricsServer 315.42
44 TestAddons/StoppedEnableDisable 154.24
96 TestFunctional/parallel/PersistentVolumeClaim 190.83
163 TestMultiControlPlane/serial/StopSecondaryNode 141.43
164 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 5.58
165 TestMultiControlPlane/serial/RestartSecondaryNode 6.65
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.28
167 TestMultiControlPlane/serial/RestartClusterKeepsNodes 374.6
170 TestMultiControlPlane/serial/StopCluster 141.71
230 TestMultiNode/serial/RestartKeepsNodes 328.63
232 TestMultiNode/serial/StopMultiNode 144.68
239 TestPreload 179.21
247 TestKubernetesUpgrade 403.77
261 TestPause/serial/SecondStartNoReconfiguration 59.51
283 TestStartStop/group/old-k8s-version/serial/FirstStart 300.51
291 TestStartStop/group/no-preload/serial/Stop 139.18
294 TestStartStop/group/embed-certs/serial/Stop 139.04
295 TestStartStop/group/old-k8s-version/serial/DeployApp 0.55
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 102.68
297 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
299 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
305 TestStartStop/group/old-k8s-version/serial/SecondStart 755.91
308 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.05
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
311 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 542.58
312 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.5
313 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 541.81
314 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 543.45
315 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 440.75
316 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 370.26
317 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 93.94
345 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 138.11
TestAddons/parallel/Ingress (156.09s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-800266 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-800266 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-800266 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [43c83fb0-f623-43ea-bc3c-91da7206fa2c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [43c83fb0-f623-43ea-bc3c-91da7206fa2c] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.00483092s
I1001 19:05:57.113428   18430 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-800266 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-800266 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.523912745s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-800266 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-800266 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.56
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-800266 -n addons-800266
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-800266 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-800266 logs -n 25: (1.225635996s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 01 Oct 24 18:54 UTC | 01 Oct 24 18:54 UTC |
	| delete  | -p download-only-195954                                                                     | download-only-195954 | jenkins | v1.34.0 | 01 Oct 24 18:54 UTC | 01 Oct 24 18:54 UTC |
	| delete  | -p download-only-333407                                                                     | download-only-333407 | jenkins | v1.34.0 | 01 Oct 24 18:54 UTC | 01 Oct 24 18:54 UTC |
	| delete  | -p download-only-195954                                                                     | download-only-195954 | jenkins | v1.34.0 | 01 Oct 24 18:54 UTC | 01 Oct 24 18:54 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-213993 | jenkins | v1.34.0 | 01 Oct 24 18:54 UTC |                     |
	|         | binary-mirror-213993                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46019                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-213993                                                                     | binary-mirror-213993 | jenkins | v1.34.0 | 01 Oct 24 18:54 UTC | 01 Oct 24 18:54 UTC |
	| addons  | enable dashboard -p                                                                         | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 18:54 UTC |                     |
	|         | addons-800266                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 18:54 UTC |                     |
	|         | addons-800266                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-800266 --wait=true                                                                | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 18:54 UTC | 01 Oct 24 18:56 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-800266 addons disable                                                                | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 18:56 UTC | 01 Oct 24 18:56 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-800266 addons disable                                                                | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:05 UTC | 01 Oct 24 19:05 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:05 UTC | 01 Oct 24 19:05 UTC |
	|         | -p addons-800266                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:05 UTC | 01 Oct 24 19:05 UTC |
	|         | -p addons-800266                                                                            |                      |         |         |                     |                     |
	| addons  | addons-800266 addons disable                                                                | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:05 UTC | 01 Oct 24 19:05 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-800266 ip                                                                            | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:05 UTC | 01 Oct 24 19:05 UTC |
	| addons  | addons-800266 addons disable                                                                | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:05 UTC | 01 Oct 24 19:05 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-800266 ssh cat                                                                       | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:05 UTC | 01 Oct 24 19:05 UTC |
	|         | /opt/local-path-provisioner/pvc-8cdb206c-3008-4806-8f7b-043e61fbf684_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-800266 addons disable                                                                | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:05 UTC | 01 Oct 24 19:06 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-800266 addons                                                                        | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:05 UTC | 01 Oct 24 19:05 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-800266 addons                                                                        | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:05 UTC | 01 Oct 24 19:05 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-800266 ssh curl -s                                                                   | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:05 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-800266 addons                                                                        | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:06 UTC | 01 Oct 24 19:06 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-800266 addons                                                                        | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:06 UTC | 01 Oct 24 19:06 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-800266 addons disable                                                                | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:06 UTC | 01 Oct 24 19:06 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ip      | addons-800266 ip                                                                            | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:08 UTC | 01 Oct 24 19:08 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 18:54:45
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 18:54:45.498233   19130 out.go:345] Setting OutFile to fd 1 ...
	I1001 18:54:45.498361   19130 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 18:54:45.498373   19130 out.go:358] Setting ErrFile to fd 2...
	I1001 18:54:45.498380   19130 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 18:54:45.498595   19130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 18:54:45.499195   19130 out.go:352] Setting JSON to false
	I1001 18:54:45.499987   19130 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2227,"bootTime":1727806658,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 18:54:45.500077   19130 start.go:139] virtualization: kvm guest
	I1001 18:54:45.501925   19130 out.go:177] * [addons-800266] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 18:54:45.503081   19130 notify.go:220] Checking for updates...
	I1001 18:54:45.503103   19130 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 18:54:45.504220   19130 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 18:54:45.505318   19130 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 18:54:45.506383   19130 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 18:54:45.507427   19130 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 18:54:45.508563   19130 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 18:54:45.509781   19130 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 18:54:45.542204   19130 out.go:177] * Using the kvm2 driver based on user configuration
	I1001 18:54:45.543033   19130 start.go:297] selected driver: kvm2
	I1001 18:54:45.543048   19130 start.go:901] validating driver "kvm2" against <nil>
	I1001 18:54:45.543059   19130 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 18:54:45.543726   19130 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 18:54:45.543817   19130 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-11198/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 18:54:45.559273   19130 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 18:54:45.559325   19130 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 18:54:45.559575   19130 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 18:54:45.559604   19130 cni.go:84] Creating CNI manager for ""
	I1001 18:54:45.559640   19130 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 18:54:45.559650   19130 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 18:54:45.559699   19130 start.go:340] cluster config:
	{Name:addons-800266 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-800266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHA
gentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 18:54:45.559789   19130 iso.go:125] acquiring lock: {Name:mk06aa0d5182c9fbfa5e40313025cbec2b4400a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 18:54:45.561303   19130 out.go:177] * Starting "addons-800266" primary control-plane node in "addons-800266" cluster
	I1001 18:54:45.562260   19130 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 18:54:45.562302   19130 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 18:54:45.562315   19130 cache.go:56] Caching tarball of preloaded images
	I1001 18:54:45.562412   19130 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 18:54:45.562426   19130 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 18:54:45.562844   19130 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/config.json ...
	I1001 18:54:45.562870   19130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/config.json: {Name:mk42ad5268c0ee1c54e04bf3050a8a4716c0fd89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:54:45.563052   19130 start.go:360] acquireMachinesLock for addons-800266: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 18:54:45.563152   19130 start.go:364] duration metric: took 80.6µs to acquireMachinesLock for "addons-800266"
	I1001 18:54:45.563177   19130 start.go:93] Provisioning new machine with config: &{Name:addons-800266 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:addons-800266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 18:54:45.563265   19130 start.go:125] createHost starting for "" (driver="kvm2")
	I1001 18:54:45.564909   19130 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1001 18:54:45.565053   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:54:45.565097   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:54:45.580192   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35307
	I1001 18:54:45.580732   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:54:45.581364   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:54:45.581392   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:54:45.581758   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:54:45.581964   19130 main.go:141] libmachine: (addons-800266) Calling .GetMachineName
	I1001 18:54:45.582155   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:54:45.582322   19130 start.go:159] libmachine.API.Create for "addons-800266" (driver="kvm2")
	I1001 18:54:45.582356   19130 client.go:168] LocalClient.Create starting
	I1001 18:54:45.582394   19130 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem
	I1001 18:54:45.662095   19130 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem
	I1001 18:54:45.765180   19130 main.go:141] libmachine: Running pre-create checks...
	I1001 18:54:45.765204   19130 main.go:141] libmachine: (addons-800266) Calling .PreCreateCheck
	I1001 18:54:45.765686   19130 main.go:141] libmachine: (addons-800266) Calling .GetConfigRaw
	I1001 18:54:45.766122   19130 main.go:141] libmachine: Creating machine...
	I1001 18:54:45.766137   19130 main.go:141] libmachine: (addons-800266) Calling .Create
	I1001 18:54:45.766335   19130 main.go:141] libmachine: (addons-800266) Creating KVM machine...
	I1001 18:54:45.767606   19130 main.go:141] libmachine: (addons-800266) DBG | found existing default KVM network
	I1001 18:54:45.768408   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:45.768211   19152 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111f0}
	I1001 18:54:45.768501   19130 main.go:141] libmachine: (addons-800266) DBG | created network xml: 
	I1001 18:54:45.768525   19130 main.go:141] libmachine: (addons-800266) DBG | <network>
	I1001 18:54:45.768533   19130 main.go:141] libmachine: (addons-800266) DBG |   <name>mk-addons-800266</name>
	I1001 18:54:45.768540   19130 main.go:141] libmachine: (addons-800266) DBG |   <dns enable='no'/>
	I1001 18:54:45.768547   19130 main.go:141] libmachine: (addons-800266) DBG |   
	I1001 18:54:45.768556   19130 main.go:141] libmachine: (addons-800266) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1001 18:54:45.768565   19130 main.go:141] libmachine: (addons-800266) DBG |     <dhcp>
	I1001 18:54:45.768574   19130 main.go:141] libmachine: (addons-800266) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1001 18:54:45.768586   19130 main.go:141] libmachine: (addons-800266) DBG |     </dhcp>
	I1001 18:54:45.768594   19130 main.go:141] libmachine: (addons-800266) DBG |   </ip>
	I1001 18:54:45.768601   19130 main.go:141] libmachine: (addons-800266) DBG |   
	I1001 18:54:45.768610   19130 main.go:141] libmachine: (addons-800266) DBG | </network>
	I1001 18:54:45.768640   19130 main.go:141] libmachine: (addons-800266) DBG | 
	I1001 18:54:45.773904   19130 main.go:141] libmachine: (addons-800266) DBG | trying to create private KVM network mk-addons-800266 192.168.39.0/24...
	I1001 18:54:45.841936   19130 main.go:141] libmachine: (addons-800266) DBG | private KVM network mk-addons-800266 192.168.39.0/24 created
	I1001 18:54:45.841967   19130 main.go:141] libmachine: (addons-800266) Setting up store path in /home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266 ...
	I1001 18:54:45.841985   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:45.841901   19152 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 18:54:45.842003   19130 main.go:141] libmachine: (addons-800266) Building disk image from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 18:54:45.842040   19130 main.go:141] libmachine: (addons-800266) Downloading /home/jenkins/minikube-integration/19736-11198/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 18:54:46.116666   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:46.116527   19152 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa...
	I1001 18:54:46.227591   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:46.227418   19152 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/addons-800266.rawdisk...
	I1001 18:54:46.227635   19130 main.go:141] libmachine: (addons-800266) DBG | Writing magic tar header
	I1001 18:54:46.227646   19130 main.go:141] libmachine: (addons-800266) DBG | Writing SSH key tar header
	I1001 18:54:46.227662   19130 main.go:141] libmachine: (addons-800266) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266 (perms=drwx------)
	I1001 18:54:46.227678   19130 main.go:141] libmachine: (addons-800266) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines (perms=drwxr-xr-x)
	I1001 18:54:46.227685   19130 main.go:141] libmachine: (addons-800266) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube (perms=drwxr-xr-x)
	I1001 18:54:46.227695   19130 main.go:141] libmachine: (addons-800266) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198 (perms=drwxrwxr-x)
	I1001 18:54:46.227705   19130 main.go:141] libmachine: (addons-800266) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 18:54:46.227720   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:46.227537   19152 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266 ...
	I1001 18:54:46.227730   19130 main.go:141] libmachine: (addons-800266) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 18:54:46.227755   19130 main.go:141] libmachine: (addons-800266) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266
	I1001 18:54:46.227768   19130 main.go:141] libmachine: (addons-800266) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines
	I1001 18:54:46.227774   19130 main.go:141] libmachine: (addons-800266) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 18:54:46.227785   19130 main.go:141] libmachine: (addons-800266) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198
	I1001 18:54:46.227805   19130 main.go:141] libmachine: (addons-800266) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 18:54:46.227817   19130 main.go:141] libmachine: (addons-800266) Creating domain...
	I1001 18:54:46.227829   19130 main.go:141] libmachine: (addons-800266) DBG | Checking permissions on dir: /home/jenkins
	I1001 18:54:46.227839   19130 main.go:141] libmachine: (addons-800266) DBG | Checking permissions on dir: /home
	I1001 18:54:46.227850   19130 main.go:141] libmachine: (addons-800266) DBG | Skipping /home - not owner
	I1001 18:54:46.228905   19130 main.go:141] libmachine: (addons-800266) define libvirt domain using xml: 
	I1001 18:54:46.228936   19130 main.go:141] libmachine: (addons-800266) <domain type='kvm'>
	I1001 18:54:46.228946   19130 main.go:141] libmachine: (addons-800266)   <name>addons-800266</name>
	I1001 18:54:46.228952   19130 main.go:141] libmachine: (addons-800266)   <memory unit='MiB'>4000</memory>
	I1001 18:54:46.228961   19130 main.go:141] libmachine: (addons-800266)   <vcpu>2</vcpu>
	I1001 18:54:46.228973   19130 main.go:141] libmachine: (addons-800266)   <features>
	I1001 18:54:46.228982   19130 main.go:141] libmachine: (addons-800266)     <acpi/>
	I1001 18:54:46.228989   19130 main.go:141] libmachine: (addons-800266)     <apic/>
	I1001 18:54:46.228998   19130 main.go:141] libmachine: (addons-800266)     <pae/>
	I1001 18:54:46.229004   19130 main.go:141] libmachine: (addons-800266)     
	I1001 18:54:46.229012   19130 main.go:141] libmachine: (addons-800266)   </features>
	I1001 18:54:46.229020   19130 main.go:141] libmachine: (addons-800266)   <cpu mode='host-passthrough'>
	I1001 18:54:46.229028   19130 main.go:141] libmachine: (addons-800266)   
	I1001 18:54:46.229043   19130 main.go:141] libmachine: (addons-800266)   </cpu>
	I1001 18:54:46.229054   19130 main.go:141] libmachine: (addons-800266)   <os>
	I1001 18:54:46.229066   19130 main.go:141] libmachine: (addons-800266)     <type>hvm</type>
	I1001 18:54:46.229077   19130 main.go:141] libmachine: (addons-800266)     <boot dev='cdrom'/>
	I1001 18:54:46.229085   19130 main.go:141] libmachine: (addons-800266)     <boot dev='hd'/>
	I1001 18:54:46.229094   19130 main.go:141] libmachine: (addons-800266)     <bootmenu enable='no'/>
	I1001 18:54:46.229101   19130 main.go:141] libmachine: (addons-800266)   </os>
	I1001 18:54:46.229109   19130 main.go:141] libmachine: (addons-800266)   <devices>
	I1001 18:54:46.229117   19130 main.go:141] libmachine: (addons-800266)     <disk type='file' device='cdrom'>
	I1001 18:54:46.229141   19130 main.go:141] libmachine: (addons-800266)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/boot2docker.iso'/>
	I1001 18:54:46.229156   19130 main.go:141] libmachine: (addons-800266)       <target dev='hdc' bus='scsi'/>
	I1001 18:54:46.229167   19130 main.go:141] libmachine: (addons-800266)       <readonly/>
	I1001 18:54:46.229176   19130 main.go:141] libmachine: (addons-800266)     </disk>
	I1001 18:54:46.229190   19130 main.go:141] libmachine: (addons-800266)     <disk type='file' device='disk'>
	I1001 18:54:46.229203   19130 main.go:141] libmachine: (addons-800266)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 18:54:46.229219   19130 main.go:141] libmachine: (addons-800266)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/addons-800266.rawdisk'/>
	I1001 18:54:46.229232   19130 main.go:141] libmachine: (addons-800266)       <target dev='hda' bus='virtio'/>
	I1001 18:54:46.229244   19130 main.go:141] libmachine: (addons-800266)     </disk>
	I1001 18:54:46.229252   19130 main.go:141] libmachine: (addons-800266)     <interface type='network'>
	I1001 18:54:46.229266   19130 main.go:141] libmachine: (addons-800266)       <source network='mk-addons-800266'/>
	I1001 18:54:46.229283   19130 main.go:141] libmachine: (addons-800266)       <model type='virtio'/>
	I1001 18:54:46.229307   19130 main.go:141] libmachine: (addons-800266)     </interface>
	I1001 18:54:46.229325   19130 main.go:141] libmachine: (addons-800266)     <interface type='network'>
	I1001 18:54:46.229331   19130 main.go:141] libmachine: (addons-800266)       <source network='default'/>
	I1001 18:54:46.229336   19130 main.go:141] libmachine: (addons-800266)       <model type='virtio'/>
	I1001 18:54:46.229344   19130 main.go:141] libmachine: (addons-800266)     </interface>
	I1001 18:54:46.229357   19130 main.go:141] libmachine: (addons-800266)     <serial type='pty'>
	I1001 18:54:46.229364   19130 main.go:141] libmachine: (addons-800266)       <target port='0'/>
	I1001 18:54:46.229368   19130 main.go:141] libmachine: (addons-800266)     </serial>
	I1001 18:54:46.229376   19130 main.go:141] libmachine: (addons-800266)     <console type='pty'>
	I1001 18:54:46.229385   19130 main.go:141] libmachine: (addons-800266)       <target type='serial' port='0'/>
	I1001 18:54:46.229392   19130 main.go:141] libmachine: (addons-800266)     </console>
	I1001 18:54:46.229396   19130 main.go:141] libmachine: (addons-800266)     <rng model='virtio'>
	I1001 18:54:46.229404   19130 main.go:141] libmachine: (addons-800266)       <backend model='random'>/dev/random</backend>
	I1001 18:54:46.229414   19130 main.go:141] libmachine: (addons-800266)     </rng>
	I1001 18:54:46.229450   19130 main.go:141] libmachine: (addons-800266)     
	I1001 18:54:46.229474   19130 main.go:141] libmachine: (addons-800266)     
	I1001 18:54:46.229484   19130 main.go:141] libmachine: (addons-800266)   </devices>
	I1001 18:54:46.229494   19130 main.go:141] libmachine: (addons-800266) </domain>
	I1001 18:54:46.229507   19130 main.go:141] libmachine: (addons-800266) 
	I1001 18:54:46.236399   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:7a:a5:eb in network default
	I1001 18:54:46.236906   19130 main.go:141] libmachine: (addons-800266) Ensuring networks are active...
	I1001 18:54:46.236926   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:54:46.237570   19130 main.go:141] libmachine: (addons-800266) Ensuring network default is active
	I1001 18:54:46.237872   19130 main.go:141] libmachine: (addons-800266) Ensuring network mk-addons-800266 is active
	I1001 18:54:46.239179   19130 main.go:141] libmachine: (addons-800266) Getting domain xml...
	I1001 18:54:46.239936   19130 main.go:141] libmachine: (addons-800266) Creating domain...
	I1001 18:54:47.656550   19130 main.go:141] libmachine: (addons-800266) Waiting to get IP...
	I1001 18:54:47.657509   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:54:47.657941   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find current IP address of domain addons-800266 in network mk-addons-800266
	I1001 18:54:47.657994   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:47.657933   19152 retry.go:31] will retry after 287.757922ms: waiting for machine to come up
	I1001 18:54:47.947332   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:54:47.947608   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find current IP address of domain addons-800266 in network mk-addons-800266
	I1001 18:54:47.947635   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:47.947559   19152 retry.go:31] will retry after 345.990873ms: waiting for machine to come up
	I1001 18:54:48.295045   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:54:48.295437   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find current IP address of domain addons-800266 in network mk-addons-800266
	I1001 18:54:48.295459   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:48.295404   19152 retry.go:31] will retry after 397.709371ms: waiting for machine to come up
	I1001 18:54:48.696115   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:54:48.696512   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find current IP address of domain addons-800266 in network mk-addons-800266
	I1001 18:54:48.696534   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:48.696469   19152 retry.go:31] will retry after 508.256405ms: waiting for machine to come up
	I1001 18:54:49.206276   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:54:49.206780   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find current IP address of domain addons-800266 in network mk-addons-800266
	I1001 18:54:49.206809   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:49.206723   19152 retry.go:31] will retry after 734.08879ms: waiting for machine to come up
	I1001 18:54:49.942495   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:54:49.942835   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find current IP address of domain addons-800266 in network mk-addons-800266
	I1001 18:54:49.942866   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:49.942803   19152 retry.go:31] will retry after 875.435099ms: waiting for machine to come up
	I1001 18:54:50.819451   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:54:50.819814   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find current IP address of domain addons-800266 in network mk-addons-800266
	I1001 18:54:50.819847   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:50.819785   19152 retry.go:31] will retry after 955.050707ms: waiting for machine to come up
	I1001 18:54:51.777002   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:54:51.777479   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find current IP address of domain addons-800266 in network mk-addons-800266
	I1001 18:54:51.777505   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:51.777419   19152 retry.go:31] will retry after 1.444896252s: waiting for machine to come up
	I1001 18:54:53.223789   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:54:53.224170   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find current IP address of domain addons-800266 in network mk-addons-800266
	I1001 18:54:53.224204   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:53.224102   19152 retry.go:31] will retry after 1.214527673s: waiting for machine to come up
	I1001 18:54:54.440479   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:54:54.440898   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find current IP address of domain addons-800266 in network mk-addons-800266
	I1001 18:54:54.440924   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:54.440860   19152 retry.go:31] will retry after 1.791674016s: waiting for machine to come up
	I1001 18:54:56.234623   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:54:56.235230   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find current IP address of domain addons-800266 in network mk-addons-800266
	I1001 18:54:56.235254   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:56.235167   19152 retry.go:31] will retry after 1.939828883s: waiting for machine to come up
	I1001 18:54:58.177363   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:54:58.177904   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find current IP address of domain addons-800266 in network mk-addons-800266
	I1001 18:54:58.177932   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:58.177862   19152 retry.go:31] will retry after 3.297408742s: waiting for machine to come up
	I1001 18:55:01.477029   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:01.477440   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find current IP address of domain addons-800266 in network mk-addons-800266
	I1001 18:55:01.477461   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:55:01.477393   19152 retry.go:31] will retry after 2.96185412s: waiting for machine to come up
	I1001 18:55:04.442661   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:04.443064   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find current IP address of domain addons-800266 in network mk-addons-800266
	I1001 18:55:04.443085   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:55:04.443024   19152 retry.go:31] will retry after 4.519636945s: waiting for machine to come up
	I1001 18:55:08.966536   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:08.967003   19130 main.go:141] libmachine: (addons-800266) Found IP for machine: 192.168.39.56
	I1001 18:55:08.967018   19130 main.go:141] libmachine: (addons-800266) Reserving static IP address...
	I1001 18:55:08.967054   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has current primary IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:08.967453   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find host DHCP lease matching {name: "addons-800266", mac: "52:54:00:2e:3f:6d", ip: "192.168.39.56"} in network mk-addons-800266
	I1001 18:55:09.038868   19130 main.go:141] libmachine: (addons-800266) DBG | Getting to WaitForSSH function...
	I1001 18:55:09.038893   19130 main.go:141] libmachine: (addons-800266) Reserved static IP address: 192.168.39.56
	I1001 18:55:09.038906   19130 main.go:141] libmachine: (addons-800266) Waiting for SSH to be available...
	I1001 18:55:09.041494   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.041879   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:09.041907   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.042082   19130 main.go:141] libmachine: (addons-800266) DBG | Using SSH client type: external
	I1001 18:55:09.042111   19130 main.go:141] libmachine: (addons-800266) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa (-rw-------)
	I1001 18:55:09.042140   19130 main.go:141] libmachine: (addons-800266) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.56 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 18:55:09.042159   19130 main.go:141] libmachine: (addons-800266) DBG | About to run SSH command:
	I1001 18:55:09.042172   19130 main.go:141] libmachine: (addons-800266) DBG | exit 0
	I1001 18:55:09.172742   19130 main.go:141] libmachine: (addons-800266) DBG | SSH cmd err, output: <nil>: 
	I1001 18:55:09.173014   19130 main.go:141] libmachine: (addons-800266) KVM machine creation complete!
	I1001 18:55:09.173314   19130 main.go:141] libmachine: (addons-800266) Calling .GetConfigRaw
	I1001 18:55:09.173939   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:09.174135   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:09.174296   19130 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 18:55:09.174312   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:09.175520   19130 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 18:55:09.175543   19130 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 18:55:09.175551   19130 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 18:55:09.175560   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:09.177830   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.178171   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:09.178203   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.178309   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:09.178480   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:09.178647   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:09.178815   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:09.178945   19130 main.go:141] libmachine: Using SSH client type: native
	I1001 18:55:09.179201   19130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1001 18:55:09.179214   19130 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 18:55:09.287665   19130 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 18:55:09.287692   19130 main.go:141] libmachine: Detecting the provisioner...
	I1001 18:55:09.287706   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:09.290528   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.290883   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:09.290900   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.291013   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:09.291188   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:09.291309   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:09.291429   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:09.291541   19130 main.go:141] libmachine: Using SSH client type: native
	I1001 18:55:09.291745   19130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1001 18:55:09.291760   19130 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 18:55:09.396609   19130 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 18:55:09.396671   19130 main.go:141] libmachine: found compatible host: buildroot
	I1001 18:55:09.396678   19130 main.go:141] libmachine: Provisioning with buildroot...
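For reference, the provisioner detection above boils down to reading /etc/os-release over SSH and matching its ID field. A minimal sketch of the same check done by hand, using only the key path and address already shown in this log:

    ssh -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa \
        docker@192.168.39.56 'grep ^ID= /etc/os-release'
    # ID=buildroot  -> libmachine selects its buildroot provisioner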
	I1001 18:55:09.396684   19130 main.go:141] libmachine: (addons-800266) Calling .GetMachineName
	I1001 18:55:09.396947   19130 buildroot.go:166] provisioning hostname "addons-800266"
	I1001 18:55:09.396976   19130 main.go:141] libmachine: (addons-800266) Calling .GetMachineName
	I1001 18:55:09.397164   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:09.399516   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.399799   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:09.399827   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.399955   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:09.400153   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:09.400292   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:09.400569   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:09.400771   19130 main.go:141] libmachine: Using SSH client type: native
	I1001 18:55:09.400924   19130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1001 18:55:09.400935   19130 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-800266 && echo "addons-800266" | sudo tee /etc/hostname
	I1001 18:55:09.522797   19130 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-800266
	
	I1001 18:55:09.522825   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:09.525396   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.525782   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:09.525811   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.525942   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:09.526125   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:09.526368   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:09.526579   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:09.526757   19130 main.go:141] libmachine: Using SSH client type: native
	I1001 18:55:09.526928   19130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1001 18:55:09.526953   19130 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-800266' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-800266/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-800266' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 18:55:09.641587   19130 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 18:55:09.641619   19130 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 18:55:09.641687   19130 buildroot.go:174] setting up certificates
	I1001 18:55:09.641707   19130 provision.go:84] configureAuth start
	I1001 18:55:09.641722   19130 main.go:141] libmachine: (addons-800266) Calling .GetMachineName
	I1001 18:55:09.642058   19130 main.go:141] libmachine: (addons-800266) Calling .GetIP
	I1001 18:55:09.644641   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.644929   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:09.644958   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.645092   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:09.647308   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.647727   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:09.647747   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.647921   19130 provision.go:143] copyHostCerts
	I1001 18:55:09.647991   19130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 18:55:09.648126   19130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 18:55:09.648698   19130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 18:55:09.648769   19130 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.addons-800266 san=[127.0.0.1 192.168.39.56 addons-800266 localhost minikube]
	I1001 18:55:09.720055   19130 provision.go:177] copyRemoteCerts
	I1001 18:55:09.720117   19130 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 18:55:09.720139   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:09.722593   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.722878   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:09.722909   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.723021   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:09.723220   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:09.723352   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:09.723486   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:09.806311   19130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 18:55:09.829955   19130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 18:55:09.852875   19130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 18:55:09.876663   19130 provision.go:87] duration metric: took 234.933049ms to configureAuth
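For context, the server certificate copied above was generated with the SAN list printed earlier (127.0.0.1, 192.168.39.56, addons-800266, localhost, minikube). A hedged way to confirm those SANs on the build host, using the path from this log:

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'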
	I1001 18:55:09.876697   19130 buildroot.go:189] setting minikube options for container-runtime
	I1001 18:55:09.876880   19130 config.go:182] Loaded profile config "addons-800266": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 18:55:09.876963   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:09.879582   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.879902   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:09.879924   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.880154   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:09.880324   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:09.880504   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:09.880636   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:09.880798   19130 main.go:141] libmachine: Using SSH client type: native
	I1001 18:55:09.880952   19130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1001 18:55:09.880965   19130 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 18:55:10.103689   19130 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 18:55:10.103724   19130 main.go:141] libmachine: Checking connection to Docker...
	I1001 18:55:10.103734   19130 main.go:141] libmachine: (addons-800266) Calling .GetURL
	I1001 18:55:10.104989   19130 main.go:141] libmachine: (addons-800266) DBG | Using libvirt version 6000000
	I1001 18:55:10.107029   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:10.107465   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:10.107489   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:10.107665   19130 main.go:141] libmachine: Docker is up and running!
	I1001 18:55:10.107693   19130 main.go:141] libmachine: Reticulating splines...
	I1001 18:55:10.107701   19130 client.go:171] duration metric: took 24.525337699s to LocalClient.Create
	I1001 18:55:10.107724   19130 start.go:167] duration metric: took 24.52540274s to libmachine.API.Create "addons-800266"
	I1001 18:55:10.107742   19130 start.go:293] postStartSetup for "addons-800266" (driver="kvm2")
	I1001 18:55:10.107754   19130 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 18:55:10.107771   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:10.108014   19130 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 18:55:10.108038   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:10.110123   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:10.110416   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:10.110441   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:10.110534   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:10.110709   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:10.110838   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:10.110949   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:10.194149   19130 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 18:55:10.198077   19130 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 18:55:10.198110   19130 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 18:55:10.198208   19130 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 18:55:10.198253   19130 start.go:296] duration metric: took 90.498772ms for postStartSetup
	I1001 18:55:10.198290   19130 main.go:141] libmachine: (addons-800266) Calling .GetConfigRaw
	I1001 18:55:10.198844   19130 main.go:141] libmachine: (addons-800266) Calling .GetIP
	I1001 18:55:10.201351   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:10.201697   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:10.201727   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:10.201963   19130 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/config.json ...
	I1001 18:55:10.202182   19130 start.go:128] duration metric: took 24.638906267s to createHost
	I1001 18:55:10.202204   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:10.204338   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:10.204595   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:10.204640   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:10.204767   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:10.204960   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:10.205107   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:10.205266   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:10.205416   19130 main.go:141] libmachine: Using SSH client type: native
	I1001 18:55:10.205570   19130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1001 18:55:10.205579   19130 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 18:55:10.312750   19130 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727808910.290102030
	
	I1001 18:55:10.312772   19130 fix.go:216] guest clock: 1727808910.290102030
	I1001 18:55:10.312781   19130 fix.go:229] Guest: 2024-10-01 18:55:10.29010203 +0000 UTC Remote: 2024-10-01 18:55:10.202195194 +0000 UTC m=+24.739487507 (delta=87.906836ms)
	I1001 18:55:10.312825   19130 fix.go:200] guest clock delta is within tolerance: 87.906836ms
	I1001 18:55:10.312832   19130 start.go:83] releasing machines lock for "addons-800266", held for 24.749666187s
	I1001 18:55:10.312860   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:10.313125   19130 main.go:141] libmachine: (addons-800266) Calling .GetIP
	I1001 18:55:10.315583   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:10.315963   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:10.315991   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:10.316175   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:10.316658   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:10.316826   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:10.316933   19130 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 18:55:10.316988   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:10.317004   19130 ssh_runner.go:195] Run: cat /version.json
	I1001 18:55:10.317025   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:10.319273   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:10.319595   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:10.319620   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:10.319748   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:10.319778   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:10.319949   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:10.320097   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:10.320129   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:10.320151   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:10.320249   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:10.320346   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:10.320495   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:10.320656   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:10.320776   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:10.444454   19130 ssh_runner.go:195] Run: systemctl --version
	I1001 18:55:10.450206   19130 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 18:55:10.611244   19130 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 18:55:10.617482   19130 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 18:55:10.617543   19130 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 18:55:10.632957   19130 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 18:55:10.632980   19130 start.go:495] detecting cgroup driver to use...
	I1001 18:55:10.633036   19130 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 18:55:10.650705   19130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 18:55:10.666584   19130 docker.go:217] disabling cri-docker service (if available) ...
	I1001 18:55:10.666640   19130 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 18:55:10.683310   19130 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 18:55:10.699876   19130 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 18:55:10.825746   19130 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 18:55:10.980084   19130 docker.go:233] disabling docker service ...
	I1001 18:55:10.980158   19130 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 18:55:10.993523   19130 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 18:55:11.005606   19130 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 18:55:11.119488   19130 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 18:55:11.244662   19130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 18:55:11.257856   19130 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 18:55:11.275482   19130 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 18:55:11.275558   19130 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:55:11.285459   19130 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 18:55:11.285525   19130 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:55:11.295431   19130 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:55:11.304948   19130 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:55:11.314765   19130 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 18:55:11.324726   19130 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:55:11.334767   19130 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:55:11.350940   19130 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:55:11.361015   19130 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 18:55:11.370035   19130 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 18:55:11.370089   19130 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 18:55:11.381742   19130 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 18:55:11.390795   19130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 18:55:11.515388   19130 ssh_runner.go:195] Run: sudo systemctl restart crio
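Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf declaring the pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. A quick spot-check on the node after the restart (plain grep, nothing minikube-specific) would be:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf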
	I1001 18:55:11.603860   19130 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 18:55:11.603936   19130 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 18:55:11.608260   19130 start.go:563] Will wait 60s for crictl version
	I1001 18:55:11.608338   19130 ssh_runner.go:195] Run: which crictl
	I1001 18:55:11.611830   19130 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 18:55:11.653312   19130 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 18:55:11.653438   19130 ssh_runner.go:195] Run: crio --version
	I1001 18:55:11.681133   19130 ssh_runner.go:195] Run: crio --version
	I1001 18:55:11.712735   19130 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 18:55:11.713844   19130 main.go:141] libmachine: (addons-800266) Calling .GetIP
	I1001 18:55:11.716408   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:11.716730   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:11.716773   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:11.716941   19130 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 18:55:11.720927   19130 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 18:55:11.732425   19130 kubeadm.go:883] updating cluster {Name:addons-800266 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-800266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 18:55:11.732541   19130 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 18:55:11.732598   19130 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 18:55:11.762110   19130 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1001 18:55:11.762190   19130 ssh_runner.go:195] Run: which lz4
	I1001 18:55:11.765905   19130 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 18:55:11.769536   19130 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 18:55:11.769563   19130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1001 18:55:13.007062   19130 crio.go:462] duration metric: took 1.241197445s to copy over tarball
	I1001 18:55:13.007129   19130 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 18:55:15.197941   19130 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.190786279s)
	I1001 18:55:15.197975   19130 crio.go:469] duration metric: took 2.190886906s to extract the tarball
	I1001 18:55:15.197990   19130 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1001 18:55:15.234522   19130 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 18:55:15.277654   19130 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 18:55:15.277676   19130 cache_images.go:84] Images are preloaded, skipping loading
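The preload step above copied a ~388 MB lz4-compressed tarball of container images into the VM and unpacked it under /var. For anyone debugging a preload mismatch, a read-only peek at such a tarball on the build host (path taken from this log; lz4 assumed to be installed) could look like:

    lz4 -dc /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 \
      | tar -t | head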
	I1001 18:55:15.277685   19130 kubeadm.go:934] updating node { 192.168.39.56 8443 v1.31.1 crio true true} ...
	I1001 18:55:15.277783   19130 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-800266 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-800266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 18:55:15.277848   19130 ssh_runner.go:195] Run: crio config
	I1001 18:55:15.324427   19130 cni.go:84] Creating CNI manager for ""
	I1001 18:55:15.324453   19130 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 18:55:15.324463   19130 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 18:55:15.324487   19130 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.56 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-800266 NodeName:addons-800266 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.56"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.56 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 18:55:15.324600   19130 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.56
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-800266"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.56
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.56"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 18:55:15.324654   19130 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 18:55:15.334181   19130 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 18:55:15.334244   19130 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 18:55:15.343195   19130 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1001 18:55:15.359252   19130 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 18:55:15.375182   19130 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
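The kubeadm config written above still uses the deprecated kubeadm.k8s.io/v1beta3 API, which the init warnings further down call out. Should anyone want to preview the migrated spec, a sketch on the node using the binaries path from this log would be:

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
        --old-config /var/tmp/minikube/kubeadm.yaml.new
    # prints the equivalent config in the newer API version to stdout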
	I1001 18:55:15.392316   19130 ssh_runner.go:195] Run: grep 192.168.39.56	control-plane.minikube.internal$ /etc/hosts
	I1001 18:55:15.396057   19130 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.56	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 18:55:15.407370   19130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 18:55:15.534602   19130 ssh_runner.go:195] Run: sudo systemctl start kubelet
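With the unit file and 10-kubeadm.conf drop-in in place and kubelet started, the rendered service can be inspected on the node with ordinary systemd tooling, for example:

    sudo systemctl cat kubelet
    sudo systemctl is-active kubelet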
	I1001 18:55:15.552660   19130 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266 for IP: 192.168.39.56
	I1001 18:55:15.552692   19130 certs.go:194] generating shared ca certs ...
	I1001 18:55:15.552741   19130 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:55:15.552942   19130 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 18:55:15.623145   19130 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt ...
	I1001 18:55:15.623182   19130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt: {Name:mk05f953b4d77efd685e5c62d9dd4bde7959afb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:55:15.623355   19130 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key ...
	I1001 18:55:15.623366   19130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key: {Name:mka07ee01d58eddda5541c1019a73eefd54f1248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:55:15.623435   19130 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 18:55:15.869439   19130 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt ...
	I1001 18:55:15.869470   19130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt: {Name:mkbbeef0220b26662e60cc1bef4abf6707c29b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:55:15.869629   19130 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key ...
	I1001 18:55:15.869639   19130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key: {Name:mk9ed39639120dff6cf2537c93b22962f508fe4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:55:15.869714   19130 certs.go:256] generating profile certs ...
	I1001 18:55:15.869769   19130 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.key
	I1001 18:55:15.869790   19130 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt with IP's: []
	I1001 18:55:15.988965   19130 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt ...
	I1001 18:55:15.988993   19130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: {Name:mkcf5eaaec8c159e822bb977d77d86a7c8478423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:55:15.989155   19130 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.key ...
	I1001 18:55:15.989165   19130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.key: {Name:mk945838746efd1efe9fce55c262a25f2ad1fbd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:55:15.989232   19130 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/apiserver.key.1f2c5f3f
	I1001 18:55:15.989258   19130 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/apiserver.crt.1f2c5f3f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.56]
	I1001 18:55:16.410599   19130 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/apiserver.crt.1f2c5f3f ...
	I1001 18:55:16.410634   19130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/apiserver.crt.1f2c5f3f: {Name:mka317f1778f485e5e05792a9b3437352b18d724 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:55:16.410826   19130 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/apiserver.key.1f2c5f3f ...
	I1001 18:55:16.410842   19130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/apiserver.key.1f2c5f3f: {Name:mk37ba30ffedca60d53f12cc36572f0ae020fe2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:55:16.410942   19130 certs.go:381] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/apiserver.crt.1f2c5f3f -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/apiserver.crt
	I1001 18:55:16.411022   19130 certs.go:385] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/apiserver.key.1f2c5f3f -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/apiserver.key
	I1001 18:55:16.411073   19130 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/proxy-client.key
	I1001 18:55:16.411091   19130 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/proxy-client.crt with IP's: []
	I1001 18:55:16.561425   19130 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/proxy-client.crt ...
	I1001 18:55:16.561457   19130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/proxy-client.crt: {Name:mkd4f7b51135c43a924e8e8c10071c6230b456b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:55:16.561631   19130 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/proxy-client.key ...
	I1001 18:55:16.561643   19130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/proxy-client.key: {Name:mka9b62c10d8df5e10df597d3e62631abaab9c37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:55:16.561830   19130 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 18:55:16.561869   19130 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 18:55:16.561897   19130 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 18:55:16.561925   19130 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 18:55:16.562550   19130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 18:55:16.588782   19130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 18:55:16.610915   19130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 18:55:16.633611   19130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 18:55:16.655948   19130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1001 18:55:16.678605   19130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 18:55:16.702145   19130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 18:55:16.726482   19130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 18:55:16.749789   19130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 18:55:16.771866   19130 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 18:55:16.787728   19130 ssh_runner.go:195] Run: openssl version
	I1001 18:55:16.793267   19130 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 18:55:16.803175   19130 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 18:55:16.807272   19130 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 18:55:16.807334   19130 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 18:55:16.812930   19130 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
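The b5213941.0 link created above is simply the OpenSSL subject hash of minikubeCA.pem plus a .0 suffix, which is how the system trust store indexes CA certificates; the hash itself comes from the command already shown:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # -> b5213941, hence the /etc/ssl/certs/b5213941.0 symlink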
	I1001 18:55:16.822916   19130 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 18:55:16.826703   19130 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 18:55:16.826753   19130 kubeadm.go:392] StartCluster: {Name:addons-800266 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-800266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 18:55:16.826818   19130 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 18:55:16.826861   19130 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 18:55:16.860195   19130 cri.go:89] found id: ""
	I1001 18:55:16.860255   19130 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 18:55:16.869536   19130 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 18:55:16.878885   19130 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 18:55:16.887691   19130 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 18:55:16.887712   19130 kubeadm.go:157] found existing configuration files:
	
	I1001 18:55:16.887753   19130 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 18:55:16.896852   19130 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 18:55:16.896906   19130 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 18:55:16.905647   19130 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 18:55:16.913783   19130 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 18:55:16.913831   19130 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 18:55:16.922270   19130 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 18:55:16.930603   19130 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 18:55:16.930654   19130 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 18:55:16.939360   19130 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 18:55:16.947727   19130 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 18:55:16.947797   19130 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
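
The four grep/rm pairs above are a stale-kubeconfig check: each file under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, otherwise it is removed so the following kubeadm init can regenerate it. A minimal local sketch of that logic in Go (run directly on the node rather than through minikube's ssh_runner; file list and endpoint taken from the log above) might look like:

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	// Endpoint the kubeconfigs are expected to reference (from the grep commands above).
	endpoint := []byte("https://control-plane.minikube.internal:8443")

	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}

	for _, path := range confs {
		data, err := os.ReadFile(path)
		if err != nil || !bytes.Contains(data, endpoint) {
			// Missing or pointing elsewhere: remove it so `kubeadm init` writes a fresh copy.
			if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Fprintf(os.Stderr, "removing %s: %v\n", path, rmErr)
			}
			continue
		}
		fmt.Printf("%s already references %s, keeping it\n", path, endpoint)
	}
}

In this run all four files are absent, so the check falls straight through to removal and kubeadm init starts from a clean /etc/kubernetes.
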
	I1001 18:55:16.956711   19130 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 18:55:17.004616   19130 kubeadm.go:310] W1001 18:55:16.988682     815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 18:55:17.005383   19130 kubeadm.go:310] W1001 18:55:16.989555     815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 18:55:17.109195   19130 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 18:55:26.823415   19130 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 18:55:26.823495   19130 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 18:55:26.823576   19130 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 18:55:26.823703   19130 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 18:55:26.823826   19130 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 18:55:26.823914   19130 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 18:55:26.825369   19130 out.go:235]   - Generating certificates and keys ...
	I1001 18:55:26.825456   19130 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 18:55:26.825543   19130 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 18:55:26.825634   19130 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1001 18:55:26.825712   19130 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1001 18:55:26.825799   19130 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1001 18:55:26.825889   19130 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1001 18:55:26.825980   19130 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1001 18:55:26.826141   19130 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-800266 localhost] and IPs [192.168.39.56 127.0.0.1 ::1]
	I1001 18:55:26.826227   19130 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1001 18:55:26.826368   19130 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-800266 localhost] and IPs [192.168.39.56 127.0.0.1 ::1]
	I1001 18:55:26.826465   19130 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1001 18:55:26.826557   19130 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1001 18:55:26.826627   19130 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1001 18:55:26.826718   19130 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 18:55:26.826791   19130 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 18:55:26.826874   19130 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 18:55:26.826949   19130 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 18:55:26.827038   19130 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 18:55:26.827109   19130 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 18:55:26.827220   19130 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 18:55:26.827316   19130 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 18:55:26.829589   19130 out.go:235]   - Booting up control plane ...
	I1001 18:55:26.829678   19130 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 18:55:26.829741   19130 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 18:55:26.829804   19130 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 18:55:26.829928   19130 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 18:55:26.830071   19130 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 18:55:26.830118   19130 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 18:55:26.830240   19130 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 18:55:26.830337   19130 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 18:55:26.830388   19130 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.006351501s
	I1001 18:55:26.830452   19130 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1001 18:55:26.830524   19130 kubeadm.go:310] [api-check] The API server is healthy after 4.50235606s
	I1001 18:55:26.830614   19130 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 18:55:26.830744   19130 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 18:55:26.830801   19130 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 18:55:26.830949   19130 kubeadm.go:310] [mark-control-plane] Marking the node addons-800266 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 18:55:26.830997   19130 kubeadm.go:310] [bootstrap-token] Using token: szuwwh.2qeffcf97dxqsrg4
	I1001 18:55:26.832123   19130 out.go:235]   - Configuring RBAC rules ...
	I1001 18:55:26.832217   19130 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 18:55:26.832286   19130 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 18:55:26.832431   19130 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 18:55:26.832543   19130 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 18:55:26.832660   19130 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 18:55:26.832750   19130 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 18:55:26.832910   19130 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 18:55:26.832947   19130 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 18:55:26.832986   19130 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 18:55:26.832995   19130 kubeadm.go:310] 
	I1001 18:55:26.833047   19130 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 18:55:26.833052   19130 kubeadm.go:310] 
	I1001 18:55:26.833147   19130 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 18:55:26.833161   19130 kubeadm.go:310] 
	I1001 18:55:26.833183   19130 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 18:55:26.833231   19130 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 18:55:26.833281   19130 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 18:55:26.833299   19130 kubeadm.go:310] 
	I1001 18:55:26.833347   19130 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 18:55:26.833353   19130 kubeadm.go:310] 
	I1001 18:55:26.833395   19130 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 18:55:26.833401   19130 kubeadm.go:310] 
	I1001 18:55:26.833456   19130 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 18:55:26.833520   19130 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 18:55:26.833589   19130 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 18:55:26.833608   19130 kubeadm.go:310] 
	I1001 18:55:26.833689   19130 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 18:55:26.833800   19130 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 18:55:26.833809   19130 kubeadm.go:310] 
	I1001 18:55:26.833909   19130 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token szuwwh.2qeffcf97dxqsrg4 \
	I1001 18:55:26.834032   19130 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 \
	I1001 18:55:26.834063   19130 kubeadm.go:310] 	--control-plane 
	I1001 18:55:26.834072   19130 kubeadm.go:310] 
	I1001 18:55:26.834181   19130 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 18:55:26.834189   19130 kubeadm.go:310] 
	I1001 18:55:26.834264   19130 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token szuwwh.2qeffcf97dxqsrg4 \
	I1001 18:55:26.834362   19130 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 
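
The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate. A short Go sketch for recomputing it from the CA certificate is shown below; the path is assumed from the certificateDir "/var/lib/minikube/certs" reported in the [certs] lines, and any PEM copy of the cluster CA works the same way:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// certificateDir taken from the [certs] output above; adjust if your CA lives elsewhere.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA public key.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}

The output should match the sha256:0c56ede... value embedded above, which is how a joining node verifies it is bootstrapping against the intended control plane.
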
	I1001 18:55:26.834371   19130 cni.go:84] Creating CNI manager for ""
	I1001 18:55:26.834377   19130 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 18:55:26.835560   19130 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 18:55:26.836557   19130 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 18:55:26.846776   19130 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
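
The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. As a rough illustration of what the "Configuring bridge CNI" step installs, the sketch below writes a representative bridge + host-local conflist; the bridge name, subnet, and exact fields are illustrative assumptions, not minikube's actual file:

package main

import "os"

func main() {
	// Illustrative bridge CNI config; minikube's real 1-k8s.conflist (496 bytes above)
	// may differ in names, subnet, and plugin fields.
	conflist := `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
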
	I1001 18:55:26.864849   19130 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 18:55:26.864941   19130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:55:26.864965   19130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-800266 minikube.k8s.io/updated_at=2024_10_01T18_55_26_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=addons-800266 minikube.k8s.io/primary=true
	I1001 18:55:26.899023   19130 ops.go:34] apiserver oom_adj: -16
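
ops.go arrives at that -16 by reading /proc/$(pgrep kube-apiserver)/oom_adj on the node, confirming the API server carries a negative OOM adjustment. A standalone Go sketch of the same check, walking /proc instead of shelling out to pgrep and run on the node itself, could be:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	procs, err := filepath.Glob("/proc/[0-9]*/comm")
	if err != nil {
		panic(err)
	}
	for _, commPath := range procs {
		comm, err := os.ReadFile(commPath)
		if err != nil {
			continue // process may have exited between Glob and ReadFile
		}
		if strings.TrimSpace(string(comm)) != "kube-apiserver" {
			continue
		}
		pidDir := filepath.Dir(commPath)
		adj, err := os.ReadFile(filepath.Join(pidDir, "oom_adj"))
		if err != nil {
			continue
		}
		fmt.Printf("%s oom_adj: %s", pidDir, adj) // expect -16 per the log line above
	}
}
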
	I1001 18:55:27.002107   19130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:55:27.502522   19130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:55:28.002690   19130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:55:28.502745   19130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:55:29.002933   19130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:55:29.502748   19130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:55:30.002556   19130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:55:30.502899   19130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:55:31.002194   19130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:55:31.502428   19130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:55:31.665980   19130 kubeadm.go:1113] duration metric: took 4.801106914s to wait for elevateKubeSystemPrivileges
	I1001 18:55:31.666017   19130 kubeadm.go:394] duration metric: took 14.839266983s to StartCluster
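
The half-second cadence of `kubectl get sa default` calls above is minikube waiting for the default ServiceAccount to appear before it binds kube-system:default to cluster-admin (the elevateKubeSystemPrivileges step, 4.8s here). A hedged sketch of the same wait, shelling out to kubectl the way the log does (plain `kubectl` on PATH and the timeout value are assumptions; the kubeconfig path is taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const (
		kubeconfig = "/var/lib/minikube/kubeconfig" // path taken from the log above
		timeout    = 2 * time.Minute                // illustrative; minikube uses its own bound
		interval   = 500 * time.Millisecond         // matches the ~0.5s cadence in the log
	)

	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			fmt.Println("default ServiceAccount exists; safe to apply RBAC for kube-system:default")
			return
		}
		time.Sleep(interval)
	}
	fmt.Println("timed out waiting for the default ServiceAccount")
}
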
	I1001 18:55:31.666042   19130 settings.go:142] acquiring lock: {Name:mkeb3fe64ed992373f048aef24eb0d675bcab60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:55:31.666197   19130 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 18:55:31.666705   19130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/kubeconfig: {Name:mk7ef19d546928001abf2478a3abe1d17765a591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:55:31.666985   19130 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 18:55:31.667015   19130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1001 18:55:31.667075   19130 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1001 18:55:31.667205   19130 config.go:182] Loaded profile config "addons-800266": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 18:55:31.667216   19130 addons.go:69] Setting yakd=true in profile "addons-800266"
	I1001 18:55:31.667235   19130 addons.go:234] Setting addon yakd=true in "addons-800266"
	I1001 18:55:31.667241   19130 addons.go:69] Setting ingress-dns=true in profile "addons-800266"
	I1001 18:55:31.667261   19130 addons.go:69] Setting metrics-server=true in profile "addons-800266"
	I1001 18:55:31.667258   19130 addons.go:69] Setting registry=true in profile "addons-800266"
	I1001 18:55:31.667281   19130 addons.go:69] Setting storage-provisioner=true in profile "addons-800266"
	I1001 18:55:31.667283   19130 addons.go:69] Setting inspektor-gadget=true in profile "addons-800266"
	I1001 18:55:31.667288   19130 addons.go:234] Setting addon registry=true in "addons-800266"
	I1001 18:55:31.667291   19130 addons.go:69] Setting ingress=true in profile "addons-800266"
	I1001 18:55:31.667295   19130 addons.go:234] Setting addon inspektor-gadget=true in "addons-800266"
	I1001 18:55:31.667304   19130 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-800266"
	I1001 18:55:31.667314   19130 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-800266"
	I1001 18:55:31.667319   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.667326   19130 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-800266"
	I1001 18:55:31.667329   19130 addons.go:69] Setting volcano=true in profile "addons-800266"
	I1001 18:55:31.667334   19130 addons.go:69] Setting volumesnapshots=true in profile "addons-800266"
	I1001 18:55:31.667340   19130 addons.go:234] Setting addon volcano=true in "addons-800266"
	I1001 18:55:31.667348   19130 addons.go:234] Setting addon volumesnapshots=true in "addons-800266"
	I1001 18:55:31.667358   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.667359   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.667369   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.667273   19130 addons.go:69] Setting cloud-spanner=true in profile "addons-800266"
	I1001 18:55:31.667443   19130 addons.go:234] Setting addon cloud-spanner=true in "addons-800266"
	I1001 18:55:31.667479   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.667786   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.667793   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.667804   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.667319   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.667294   19130 addons.go:234] Setting addon storage-provisioner=true in "addons-800266"
	I1001 18:55:31.667834   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.667836   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.667854   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.667858   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.667899   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.667925   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.667304   19130 addons.go:234] Setting addon ingress=true in "addons-800266"
	I1001 18:55:31.668022   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.667282   19130 addons.go:69] Setting gcp-auth=true in profile "addons-800266"
	I1001 18:55:31.668118   19130 mustload.go:65] Loading cluster: addons-800266
	I1001 18:55:31.668140   19130 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-800266"
	I1001 18:55:31.668180   19130 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-800266"
	I1001 18:55:31.668195   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.668212   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.668224   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.668282   19130 config.go:182] Loaded profile config "addons-800266": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 18:55:31.668297   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.668320   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.668382   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.668408   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.668567   19130 addons.go:69] Setting default-storageclass=true in profile "addons-800266"
	I1001 18:55:31.668582   19130 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-800266"
	I1001 18:55:31.668641   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.668642   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.668660   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.668666   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.667273   19130 addons.go:234] Setting addon metrics-server=true in "addons-800266"
	I1001 18:55:31.667825   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.667323   19130 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-800266"
	I1001 18:55:31.667269   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.668993   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.669064   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.669712   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.669746   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.667272   19130 addons.go:234] Setting addon ingress-dns=true in "addons-800266"
	I1001 18:55:31.670244   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.670634   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.670662   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.673142   19130 out.go:177] * Verifying Kubernetes components...
	I1001 18:55:31.673329   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.673415   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.673925   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.673984   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.680503   19130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 18:55:31.690767   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37929
	I1001 18:55:31.690841   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36693
	I1001 18:55:31.691179   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33717
	I1001 18:55:31.691425   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36565
	I1001 18:55:31.691509   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41113
	I1001 18:55:31.691999   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43565
	I1001 18:55:31.692009   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.692014   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.692546   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.692574   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.692718   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.692951   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.692968   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.692973   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.693349   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.693596   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.693631   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.693888   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.693917   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.693892   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.694274   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.694420   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.694442   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.694466   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.695332   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.708725   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.708763   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.708924   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.708950   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.709011   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.709035   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.710378   19130 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-800266"
	I1001 18:55:31.710421   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.710785   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.710822   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.713056   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.713504   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.713704   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.713729   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.714113   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.714392   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.714417   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.714818   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.721857   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34071
	I1001 18:55:31.722251   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.722778   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.722799   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.723264   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.723854   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.723974   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.732891   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.732943   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.733023   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.733042   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.750470   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37989
	I1001 18:55:31.750860   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45171
	I1001 18:55:31.750954   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.751020   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42719
	I1001 18:55:31.751096   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39129
	I1001 18:55:31.751387   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.751495   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.751965   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.751986   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.752116   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.752130   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.752245   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.752255   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.752714   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.752725   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.752775   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.753346   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.753386   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.753604   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.753660   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.754012   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.754048   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.754248   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.754272   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.754651   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.755197   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.755236   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.756270   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.756424   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35871
	I1001 18:55:31.756875   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.757392   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.757407   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.757744   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.758258   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.758281   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.758956   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43089
	I1001 18:55:31.759011   19130 out.go:177]   - Using image docker.io/registry:2.8.3
	I1001 18:55:31.759203   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43281
	I1001 18:55:31.759320   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.759794   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.759813   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.759874   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.760288   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.760512   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.761400   19130 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1001 18:55:31.761711   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.761732   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.762176   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.762361   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.762409   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.762492   19130 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1001 18:55:31.762507   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1001 18:55:31.762526   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:31.764085   19130 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1001 18:55:31.765055   19130 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1001 18:55:31.765077   19130 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1001 18:55:31.765102   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:31.768558   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.768570   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40601
	I1001 18:55:31.769998   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.770715   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40553
	I1001 18:55:31.770719   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:31.770742   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.770795   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.770918   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44381
	I1001 18:55:31.771091   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:31.771095   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.771284   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:31.771357   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.771587   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.771607   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.771680   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:31.771695   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.771795   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.771806   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.771939   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:31.771957   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.771999   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:31.772122   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
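
Each "new ssh client" line like the one above is sshutil.go opening a key-based SSH session to the node at 192.168.39.56:22 as user docker; the addon installers then push their manifests over these connections. A minimal Go sketch with golang.org/x/crypto/ssh, reusing the key path and user from the log and deliberately skipping host-key verification (acceptable only for a disposable test VM), might look like:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user, and address copied from the sshutil.go line above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-VM shortcut, not for real hosts
	}
	client, err := ssh.Dial("tcp", "192.168.39.56:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("sudo ls /etc/kubernetes/addons")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
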
	I1001 18:55:31.772185   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:31.772259   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.772444   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.772459   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.772528   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:31.772659   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:31.772783   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.772817   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.773266   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.773346   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.774093   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.775569   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42809
	I1001 18:55:31.775966   19130 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 18:55:31.776142   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.776773   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.776791   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.776854   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.777479   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.777511   19130 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 18:55:31.777519   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.777527   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 18:55:31.777545   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:31.778182   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.778536   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.781407   19130 addons.go:234] Setting addon default-storageclass=true in "addons-800266"
	I1001 18:55:31.781453   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.781814   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.781847   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.782044   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.782391   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33453
	I1001 18:55:31.782586   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:31.782602   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.782706   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.782814   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:31.782971   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:31.783183   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:31.783328   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.783340   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.783534   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:31.785471   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41145
	I1001 18:55:31.785942   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.786002   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.786199   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.788243   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.788885   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.788905   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.789786   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.790107   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.790184   19130 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1001 18:55:31.790802   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35599
	I1001 18:55:31.791060   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39279
	I1001 18:55:31.791431   19130 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1001 18:55:31.791449   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.791450   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1001 18:55:31.791501   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:31.791893   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.792463   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.792481   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.793038   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.793062   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.793435   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.793679   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.795141   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.795617   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.796067   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.796388   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:31.796409   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.796641   19130 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 18:55:31.796752   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:31.796839   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.797190   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:31.797208   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.797367   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:31.797514   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:31.797808   19130 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I1001 18:55:31.799107   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.799401   19130 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 18:55:31.799502   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.799411   19130 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1001 18:55:31.799553   19130 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1001 18:55:31.799575   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:31.799539   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.801890   19130 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1001 18:55:31.803010   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.803608   19130 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1001 18:55:31.803631   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1001 18:55:31.803653   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:31.803807   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:31.803837   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.804047   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:31.804208   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:31.804345   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:31.804514   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:31.807571   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39875
	I1001 18:55:31.807618   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.807810   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40893
	I1001 18:55:31.808001   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.808142   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:31.808159   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.808339   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:31.808507   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:31.808602   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.808624   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:31.808756   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.808754   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:31.808777   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.809193   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.809409   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.811325   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.812025   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.812050   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.812525   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.812789   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.813573   19130 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1001 18:55:31.814543   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.815571   19130 out.go:177]   - Using image docker.io/busybox:stable
	I1001 18:55:31.815583   19130 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1001 18:55:31.816266   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38477
	I1001 18:55:31.816467   19130 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1001 18:55:31.816499   19130 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1001 18:55:31.816516   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39245
	I1001 18:55:31.816520   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:31.816484   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43675
	I1001 18:55:31.816846   19130 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1001 18:55:31.816873   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1001 18:55:31.816895   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:31.817006   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.817090   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.817740   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.817771   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.818585   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.818934   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.819099   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35865
	I1001 18:55:31.819276   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.819590   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.819765   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.819786   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.820123   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.820329   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.820350   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.820657   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.821027   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.821106   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.821584   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.821716   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:31.821753   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:31.821767   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.821959   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.821976   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.822087   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.822248   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:31.822557   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:31.822654   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.822922   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:31.822957   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.823011   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:31.823182   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:31.823181   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:31.823267   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41147
	I1001 18:55:31.823490   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:31.823669   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:31.823820   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.823894   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.823913   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.823914   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.824381   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.824399   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.824778   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.825303   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.825339   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.825440   19130 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1001 18:55:31.825495   19130 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1001 18:55:31.825606   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.825868   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:31.825883   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:31.826597   19130 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1001 18:55:31.826610   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1001 18:55:31.826626   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:31.827319   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45937
	I1001 18:55:31.827337   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.827394   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:31.827414   19130 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1001 18:55:31.827439   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:31.827816   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:31.827827   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:31.827838   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:31.827971   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.828393   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:31.828422   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:31.828494   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.828510   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.828435   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:31.828572   19130 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	W1001 18:55:31.828638   19130 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1001 18:55:31.828870   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.829101   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.829574   19130 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1001 18:55:31.829669   19130 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1001 18:55:31.829688   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1001 18:55:31.829707   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:31.831464   19130 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1001 18:55:31.831630   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.831709   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.832455   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:31.832477   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.832670   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:31.832798   19130 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1001 18:55:31.832801   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:31.833041   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:31.833153   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:31.833750   19130 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1001 18:55:31.833828   19130 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1001 18:55:31.833842   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.833844   19130 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1001 18:55:31.833865   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:31.833963   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35187
	I1001 18:55:31.834223   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:31.834237   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.834430   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:31.834564   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:31.834635   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.834723   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:31.834862   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:31.835391   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.835404   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.835607   19130 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1001 18:55:31.835785   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.835994   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.837304   19130 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1001 18:55:31.837401   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.837418   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:31.837432   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.837568   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:31.837746   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:31.837931   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:31.838076   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:31.839309   19130 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1001 18:55:31.840143   19130 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1001 18:55:31.840159   19130 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1001 18:55:31.840174   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:31.844430   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:31.844472   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.844492   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:31.844505   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.844607   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:31.844764   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:31.844898   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	W1001 18:55:31.849821   19130 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:60608->192.168.39.56:22: read: connection reset by peer
	I1001 18:55:31.849857   19130 retry.go:31] will retry after 189.152368ms: ssh: handshake failed: read tcp 192.168.39.1:60608->192.168.39.56:22: read: connection reset by peer
	I1001 18:55:31.852259   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41861
	I1001 18:55:31.852851   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.853383   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.853403   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.853754   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.853943   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.855971   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.856197   19130 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 18:55:31.856216   19130 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 18:55:31.856237   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:31.859336   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.859786   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:31.859811   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.860005   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:31.860172   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:31.860318   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:31.860466   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:32.151512   19130 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1001 18:55:32.151546   19130 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1001 18:55:32.221551   19130 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 18:55:32.221638   19130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1001 18:55:32.276121   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 18:55:32.280552   19130 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1001 18:55:32.280576   19130 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1001 18:55:32.305872   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1001 18:55:32.308237   19130 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1001 18:55:32.308261   19130 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1001 18:55:32.327159   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1001 18:55:32.334239   19130 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1001 18:55:32.334260   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1001 18:55:32.335955   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1001 18:55:32.353745   19130 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1001 18:55:32.353788   19130 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1001 18:55:32.358636   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1001 18:55:32.364700   19130 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1001 18:55:32.364719   19130 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1001 18:55:32.365506   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1001 18:55:32.367135   19130 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1001 18:55:32.367153   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1001 18:55:32.381753   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 18:55:32.516910   19130 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1001 18:55:32.516943   19130 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1001 18:55:32.536224   19130 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1001 18:55:32.536252   19130 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1001 18:55:32.546478   19130 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1001 18:55:32.546506   19130 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1001 18:55:32.554299   19130 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1001 18:55:32.554336   19130 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1001 18:55:32.573148   19130 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1001 18:55:32.573171   19130 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1001 18:55:32.584795   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1001 18:55:32.687156   19130 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1001 18:55:32.687187   19130 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1001 18:55:32.705017   19130 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1001 18:55:32.705040   19130 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1001 18:55:32.785218   19130 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1001 18:55:32.785242   19130 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1001 18:55:32.797466   19130 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 18:55:32.797492   19130 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1001 18:55:32.853214   19130 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1001 18:55:32.853243   19130 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1001 18:55:32.965364   19130 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1001 18:55:32.965390   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1001 18:55:33.018514   19130 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1001 18:55:33.018542   19130 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1001 18:55:33.080376   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 18:55:33.100949   19130 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1001 18:55:33.100979   19130 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1001 18:55:33.141589   19130 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1001 18:55:33.141619   19130 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1001 18:55:33.173678   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1001 18:55:33.269056   19130 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1001 18:55:33.269091   19130 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1001 18:55:33.357862   19130 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1001 18:55:33.357891   19130 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1001 18:55:33.391029   19130 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 18:55:33.391052   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1001 18:55:33.454079   19130 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1001 18:55:33.454101   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1001 18:55:33.605046   19130 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1001 18:55:33.605076   19130 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1001 18:55:33.753379   19130 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1001 18:55:33.753409   19130 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1001 18:55:33.771591   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 18:55:33.871945   19130 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1001 18:55:33.871974   19130 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1001 18:55:33.923301   19130 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1001 18:55:33.923324   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1001 18:55:33.993955   19130 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1001 18:55:33.993979   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1001 18:55:34.088724   19130 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1001 18:55:34.088747   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1001 18:55:34.236254   19130 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1001 18:55:34.236286   19130 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1001 18:55:34.352717   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1001 18:55:34.471689   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1001 18:55:34.508854   19130 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.287176493s)
	I1001 18:55:34.508900   19130 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1001 18:55:34.508913   19130 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.28732779s)
	I1001 18:55:34.509641   19130 node_ready.go:35] waiting up to 6m0s for node "addons-800266" to be "Ready" ...
	I1001 18:55:34.516880   19130 node_ready.go:49] node "addons-800266" has status "Ready":"True"
	I1001 18:55:34.516926   19130 node_ready.go:38] duration metric: took 7.250218ms for node "addons-800266" to be "Ready" ...
	I1001 18:55:34.516937   19130 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 18:55:34.529252   19130 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-g6xbn" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:35.018917   19130 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-800266" context rescaled to 1 replicas
	I1001 18:55:35.625874   19130 pod_ready.go:93] pod "coredns-7c65d6cfc9-g6xbn" in "kube-system" namespace has status "Ready":"True"
	I1001 18:55:35.625897   19130 pod_ready.go:82] duration metric: took 1.096620077s for pod "coredns-7c65d6cfc9-g6xbn" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:35.625906   19130 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-h656l" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:35.993012   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.716854427s)
	I1001 18:55:35.993071   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:35.993084   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:35.993091   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.687174378s)
	I1001 18:55:35.993140   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:35.993156   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:35.993497   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:35.993504   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:35.993520   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:35.993530   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:35.993531   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:35.993543   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:35.993552   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:35.993566   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:35.993578   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:35.993600   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:35.993922   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:35.993953   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:35.993962   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:35.993992   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:35.994016   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:36.894838   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.567639407s)
	I1001 18:55:36.894882   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.558908033s)
	I1001 18:55:36.894907   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:36.894907   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:36.894919   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:36.894923   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:36.894938   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.5362666s)
	I1001 18:55:36.894978   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:36.894993   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:36.895309   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:36.895311   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:36.895325   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:36.895328   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:36.895340   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:36.895343   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:36.895327   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:36.895353   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:36.895349   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:36.895359   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:36.895375   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:36.895384   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:36.895405   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:36.895384   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:36.895425   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:36.897378   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:36.897390   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:36.897388   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:36.897472   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:36.897399   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:36.897493   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:36.897420   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:36.897426   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:36.897561   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:36.990964   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:36.990984   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:36.991265   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:36.991279   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:36.991304   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:37.197619   19130 pod_ready.go:93] pod "coredns-7c65d6cfc9-h656l" in "kube-system" namespace has status "Ready":"True"
	I1001 18:55:37.197646   19130 pod_ready.go:82] duration metric: took 1.571733309s for pod "coredns-7c65d6cfc9-h656l" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:37.197656   19130 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-800266" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:37.230347   19130 pod_ready.go:93] pod "etcd-addons-800266" in "kube-system" namespace has status "Ready":"True"
	I1001 18:55:37.230370   19130 pod_ready.go:82] duration metric: took 32.707875ms for pod "etcd-addons-800266" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:37.230383   19130 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-800266" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:37.293032   19130 pod_ready.go:93] pod "kube-apiserver-addons-800266" in "kube-system" namespace has status "Ready":"True"
	I1001 18:55:37.293059   19130 pod_ready.go:82] duration metric: took 62.668736ms for pod "kube-apiserver-addons-800266" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:37.293072   19130 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-800266" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:37.312542   19130 pod_ready.go:93] pod "kube-controller-manager-addons-800266" in "kube-system" namespace has status "Ready":"True"
	I1001 18:55:37.312568   19130 pod_ready.go:82] duration metric: took 19.487958ms for pod "kube-controller-manager-addons-800266" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:37.312579   19130 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x9xtt" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:37.328585   19130 pod_ready.go:93] pod "kube-proxy-x9xtt" in "kube-system" namespace has status "Ready":"True"
	I1001 18:55:37.328607   19130 pod_ready.go:82] duration metric: took 16.022038ms for pod "kube-proxy-x9xtt" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:37.328618   19130 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-800266" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:38.852207   19130 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1001 18:55:38.852242   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:38.855173   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:38.855652   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:38.855682   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:38.855897   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:38.856141   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:38.856308   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:38.856469   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:38.915172   19130 pod_ready.go:93] pod "kube-scheduler-addons-800266" in "kube-system" namespace has status "Ready":"True"
	I1001 18:55:38.915196   19130 pod_ready.go:82] duration metric: took 1.58657044s for pod "kube-scheduler-addons-800266" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:38.915207   19130 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-brmgb" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:39.079380   19130 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1001 18:55:39.222208   19130 addons.go:234] Setting addon gcp-auth=true in "addons-800266"
	I1001 18:55:39.222261   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:39.222641   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:39.222688   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:39.238651   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42851
	I1001 18:55:39.239165   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:39.239709   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:39.239725   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:39.240016   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:39.240467   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:39.240518   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:39.256916   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41449
	I1001 18:55:39.257474   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:39.257960   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:39.257979   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:39.258374   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:39.258590   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:39.260194   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:39.260431   19130 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1001 18:55:39.260459   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:39.263038   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:39.263415   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:39.263442   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:39.263612   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:39.263788   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:39.263953   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:39.264105   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:39.514349   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.148811867s)
	I1001 18:55:39.514407   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:39.514405   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.132620731s)
	I1001 18:55:39.514421   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:39.514441   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:39.514456   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:39.514465   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.929632059s)
	I1001 18:55:39.514502   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:39.514515   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:39.514697   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:39.514711   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:39.514719   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:39.514718   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.434311584s)
	I1001 18:55:39.514726   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:39.514742   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:39.514753   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:39.514838   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.341132511s)
	I1001 18:55:39.514859   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:39.514871   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:39.514888   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:39.514914   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:39.514922   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:39.514928   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:39.515233   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:39.515283   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:39.515291   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:39.515354   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.743724399s)
	W1001 18:55:39.515383   19130 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1001 18:55:39.515414   19130 retry.go:31] will retry after 247.380756ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1001 18:55:39.515508   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.162744865s)
	I1001 18:55:39.515574   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:39.515599   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:39.515657   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:39.515696   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:39.515720   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:39.515726   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:39.515735   19130 addons.go:475] Verifying addon ingress=true in "addons-800266"
	I1001 18:55:39.515952   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:39.515999   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:39.516007   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:39.516015   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:39.516025   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:39.516344   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:39.516524   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:39.516539   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:39.516546   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:39.516718   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:39.516763   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:39.516770   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:39.516780   19130 addons.go:475] Verifying addon registry=true in "addons-800266"
	I1001 18:55:39.516906   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:39.518843   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:39.518855   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:39.516933   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:39.518862   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:39.516957   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:39.518879   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:39.518888   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:39.518895   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:39.516971   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:39.516998   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:39.518959   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:39.519087   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:39.519114   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:39.519148   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:39.519155   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:39.519386   19130 out.go:177] * Verifying ingress addon...
	I1001 18:55:39.519662   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:39.519675   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:39.519692   19130 addons.go:475] Verifying addon metrics-server=true in "addons-800266"
	I1001 18:55:39.520324   19130 out.go:177] * Verifying registry addon...
	I1001 18:55:39.520335   19130 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-800266 service yakd-dashboard -n yakd-dashboard
	
	I1001 18:55:39.521344   19130 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1001 18:55:39.522166   19130 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1001 18:55:39.556888   19130 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1001 18:55:39.556912   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:39.557314   19130 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1001 18:55:39.557334   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:39.573325   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:39.573345   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:39.573629   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:39.573647   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:39.763251   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 18:55:40.027429   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:40.029494   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:40.258774   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.787032566s)
	I1001 18:55:40.258827   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:40.258851   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:40.259133   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:40.259170   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:40.259189   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:40.259201   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:40.259480   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:40.259569   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:40.259584   19130 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-800266"
	I1001 18:55:40.259549   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:40.260227   19130 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 18:55:40.260885   19130 out.go:177] * Verifying csi-hostpath-driver addon...
	I1001 18:55:40.262290   19130 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1001 18:55:40.263285   19130 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1001 18:55:40.263689   19130 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1001 18:55:40.263708   19130 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1001 18:55:40.273519   19130 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1001 18:55:40.273549   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:40.352400   19130 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1001 18:55:40.352429   19130 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1001 18:55:40.419876   19130 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1001 18:55:40.419902   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1001 18:55:40.457274   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1001 18:55:40.531943   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:40.532048   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:40.849941   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:40.923365   19130 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-brmgb" in "kube-system" namespace has status "Ready":"False"
	I1001 18:55:41.026416   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:41.026419   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:41.269269   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:41.455114   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.691821994s)
	I1001 18:55:41.455174   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:41.455188   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:41.455536   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:41.455553   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:41.455559   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:41.455566   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:41.455540   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:41.455831   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:41.455849   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:41.526226   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:41.526896   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:41.784716   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:41.796016   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.338692765s)
	I1001 18:55:41.796066   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:41.796078   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:41.796355   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:41.796416   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:41.796434   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:41.796448   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:41.796481   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:41.796724   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:41.796778   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:41.796792   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:41.797764   19130 addons.go:475] Verifying addon gcp-auth=true in "addons-800266"
	I1001 18:55:41.799079   19130 out.go:177] * Verifying gcp-auth addon...
	I1001 18:55:41.801295   19130 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1001 18:55:41.883115   19130 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1001 18:55:41.883137   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:42.027005   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:42.027418   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:42.279289   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:42.317026   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:42.526596   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:42.528130   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:42.768120   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:42.805316   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:43.027053   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:43.027083   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:43.268570   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:43.304394   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:43.421742   19130 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-brmgb" in "kube-system" namespace has status "Ready":"False"
	I1001 18:55:43.525731   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:43.526353   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:43.769000   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:43.805564   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:44.026267   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:44.027291   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:44.268850   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:44.305822   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:44.527864   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:44.529119   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:44.768724   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:44.805493   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:45.026135   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:45.027064   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:45.269201   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:45.306677   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:45.526185   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:45.527696   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:45.768603   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:45.808120   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:45.921864   19130 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-brmgb" in "kube-system" namespace has status "Ready":"False"
	I1001 18:55:46.026901   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:46.028431   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:46.268786   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:46.305022   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:46.526101   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:46.527828   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:46.767884   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:46.805178   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:47.195355   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:47.196523   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:47.268920   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:47.305320   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:47.525987   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:47.526412   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:47.768105   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:47.805635   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:47.921962   19130 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-brmgb" in "kube-system" namespace has status "Ready":"False"
	I1001 18:55:48.025605   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:48.026681   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:48.267626   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:48.304946   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:48.527443   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:48.528136   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:48.768209   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:48.805759   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:49.027381   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:49.027882   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:49.269578   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:49.304657   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:49.526090   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:49.526787   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:49.768182   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:49.805094   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:49.921491   19130 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-brmgb" in "kube-system" namespace has status "Ready":"True"
	I1001 18:55:49.921516   19130 pod_ready.go:82] duration metric: took 11.006302036s for pod "nvidia-device-plugin-daemonset-brmgb" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:49.921526   19130 pod_ready.go:39] duration metric: took 15.404576906s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 18:55:49.921545   19130 api_server.go:52] waiting for apiserver process to appear ...
	I1001 18:55:49.921607   19130 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:55:49.939781   19130 api_server.go:72] duration metric: took 18.272755689s to wait for apiserver process to appear ...
	I1001 18:55:49.939808   19130 api_server.go:88] waiting for apiserver healthz status ...
	I1001 18:55:49.939834   19130 api_server.go:253] Checking apiserver healthz at https://192.168.39.56:8443/healthz ...
	I1001 18:55:49.944768   19130 api_server.go:279] https://192.168.39.56:8443/healthz returned 200:
	ok
	I1001 18:55:49.945803   19130 api_server.go:141] control plane version: v1.31.1
	I1001 18:55:49.945823   19130 api_server.go:131] duration metric: took 6.00747ms to wait for apiserver health ...
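For context, the healthz probe logged just above amounts to an HTTPS GET against the apiserver's /healthz endpoint. A minimal Go sketch of such a probe, using the endpoint shown in the log and skipping TLS verification purely for illustration (the real check authenticates with the cluster's certificates from the kubeconfig), could look like:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint copied from the log above; a real probe would load the CA and
	// client certificates from the kubeconfig instead of skipping verification.
	const healthz = "https://192.168.39.56:8443/healthz"
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(healthz)
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", healthz, resp.StatusCode, body) // log above shows 200: ok
}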
	I1001 18:55:49.945832   19130 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 18:55:49.954117   19130 system_pods.go:59] 17 kube-system pods found
	I1001 18:55:49.954160   19130 system_pods.go:61] "coredns-7c65d6cfc9-h656l" [1cf425bf-e9a1-4f2b-98e3-38dc3f94625d] Running
	I1001 18:55:49.954169   19130 system_pods.go:61] "csi-hostpath-attacher-0" [7a3746e4-0f9e-4707-8c0f-a2102389ae24] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1001 18:55:49.954175   19130 system_pods.go:61] "csi-hostpath-resizer-0" [56f788c0-c09f-459b-8f37-4bc5cbc483ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1001 18:55:49.954183   19130 system_pods.go:61] "csi-hostpathplugin-jc2wz" [22221d1d-2188-4e3c-a522-e2b0dd98aa60] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1001 18:55:49.954188   19130 system_pods.go:61] "etcd-addons-800266" [1f78a1eb-6c5c-4021-9dc7-d952fce79496] Running
	I1001 18:55:49.954192   19130 system_pods.go:61] "kube-apiserver-addons-800266" [a8e4d043-4ab5-4596-9103-98f447af4070] Running
	I1001 18:55:49.954196   19130 system_pods.go:61] "kube-controller-manager-addons-800266" [344b2879-14c9-4e92-a4f9-394055ad3082] Running
	I1001 18:55:49.954201   19130 system_pods.go:61] "kube-ingress-dns-minikube" [c841f466-ff18-4ddc-8a0c-d01d392f05e4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1001 18:55:49.954207   19130 system_pods.go:61] "kube-proxy-x9xtt" [f0d9fe6d-e5fb-43c9-b8d3-8075cec0186a] Running
	I1001 18:55:49.954211   19130 system_pods.go:61] "kube-scheduler-addons-800266" [47ca10e7-9913-404a-b5a0-cef41f056ead] Running
	I1001 18:55:49.954219   19130 system_pods.go:61] "metrics-server-84c5f94fbc-7mp6j" [f319c15f-c9b0-400d-89b5-d388e9a49218] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 18:55:49.954223   19130 system_pods.go:61] "nvidia-device-plugin-daemonset-brmgb" [8958de05-2c3e-499b-9290-48c68cef124f] Running
	I1001 18:55:49.954228   19130 system_pods.go:61] "registry-66c9cd494c-s7g57" [973537c4-844f-4bcc-addb-882999c8dbbe] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1001 18:55:49.954233   19130 system_pods.go:61] "registry-proxy-tpcpz" [41439ce9-e054-4a4f-ab24-294daf5ce65a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1001 18:55:49.954241   19130 system_pods.go:61] "snapshot-controller-56fcc65765-6kh72" [4448db04-0896-4ccc-a4ea-eeaa1f1670a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 18:55:49.954249   19130 system_pods.go:61] "snapshot-controller-56fcc65765-d7cj7" [78339872-e21b-4348-9374-e13f9b6d4884] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 18:55:49.954252   19130 system_pods.go:61] "storage-provisioner" [03188f24-2d63-42be-9351-a533a36261f1] Running
	I1001 18:55:49.954258   19130 system_pods.go:74] duration metric: took 8.420329ms to wait for pod list to return data ...
	I1001 18:55:49.954265   19130 default_sa.go:34] waiting for default service account to be created ...
	I1001 18:55:49.956804   19130 default_sa.go:45] found service account: "default"
	I1001 18:55:49.956826   19130 default_sa.go:55] duration metric: took 2.554185ms for default service account to be created ...
	I1001 18:55:49.956835   19130 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 18:55:49.963541   19130 system_pods.go:86] 17 kube-system pods found
	I1001 18:55:49.963568   19130 system_pods.go:89] "coredns-7c65d6cfc9-h656l" [1cf425bf-e9a1-4f2b-98e3-38dc3f94625d] Running
	I1001 18:55:49.963575   19130 system_pods.go:89] "csi-hostpath-attacher-0" [7a3746e4-0f9e-4707-8c0f-a2102389ae24] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1001 18:55:49.963582   19130 system_pods.go:89] "csi-hostpath-resizer-0" [56f788c0-c09f-459b-8f37-4bc5cbc483ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1001 18:55:49.963589   19130 system_pods.go:89] "csi-hostpathplugin-jc2wz" [22221d1d-2188-4e3c-a522-e2b0dd98aa60] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1001 18:55:49.963594   19130 system_pods.go:89] "etcd-addons-800266" [1f78a1eb-6c5c-4021-9dc7-d952fce79496] Running
	I1001 18:55:49.963599   19130 system_pods.go:89] "kube-apiserver-addons-800266" [a8e4d043-4ab5-4596-9103-98f447af4070] Running
	I1001 18:55:49.963602   19130 system_pods.go:89] "kube-controller-manager-addons-800266" [344b2879-14c9-4e92-a4f9-394055ad3082] Running
	I1001 18:55:49.963608   19130 system_pods.go:89] "kube-ingress-dns-minikube" [c841f466-ff18-4ddc-8a0c-d01d392f05e4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1001 18:55:49.963611   19130 system_pods.go:89] "kube-proxy-x9xtt" [f0d9fe6d-e5fb-43c9-b8d3-8075cec0186a] Running
	I1001 18:55:49.963614   19130 system_pods.go:89] "kube-scheduler-addons-800266" [47ca10e7-9913-404a-b5a0-cef41f056ead] Running
	I1001 18:55:49.963630   19130 system_pods.go:89] "metrics-server-84c5f94fbc-7mp6j" [f319c15f-c9b0-400d-89b5-d388e9a49218] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 18:55:49.963636   19130 system_pods.go:89] "nvidia-device-plugin-daemonset-brmgb" [8958de05-2c3e-499b-9290-48c68cef124f] Running
	I1001 18:55:49.963642   19130 system_pods.go:89] "registry-66c9cd494c-s7g57" [973537c4-844f-4bcc-addb-882999c8dbbe] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1001 18:55:49.963650   19130 system_pods.go:89] "registry-proxy-tpcpz" [41439ce9-e054-4a4f-ab24-294daf5ce65a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1001 18:55:49.963655   19130 system_pods.go:89] "snapshot-controller-56fcc65765-6kh72" [4448db04-0896-4ccc-a4ea-eeaa1f1670a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 18:55:49.963661   19130 system_pods.go:89] "snapshot-controller-56fcc65765-d7cj7" [78339872-e21b-4348-9374-e13f9b6d4884] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 18:55:49.963665   19130 system_pods.go:89] "storage-provisioner" [03188f24-2d63-42be-9351-a533a36261f1] Running
	I1001 18:55:49.963672   19130 system_pods.go:126] duration metric: took 6.831591ms to wait for k8s-apps to be running ...
	I1001 18:55:49.963680   19130 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 18:55:49.963721   19130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 18:55:49.977922   19130 system_svc.go:56] duration metric: took 14.233798ms WaitForService to wait for kubelet
	I1001 18:55:49.977958   19130 kubeadm.go:582] duration metric: took 18.3109378s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 18:55:49.977977   19130 node_conditions.go:102] verifying NodePressure condition ...
	I1001 18:55:49.980894   19130 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 18:55:49.980926   19130 node_conditions.go:123] node cpu capacity is 2
	I1001 18:55:49.980946   19130 node_conditions.go:105] duration metric: took 2.963511ms to run NodePressure ...
	I1001 18:55:49.980961   19130 start.go:241] waiting for startup goroutines ...
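The recurring kapi.go:96 lines that follow are poll iterations waiting for pods matching a label selector to leave the Pending phase. A rough client-go sketch of that polling pattern, in which the kubeconfig path, namespace, and 500ms interval are illustrative assumptions rather than minikube's actual implementation, could look like:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	selector := "app.kubernetes.io/name=ingress-nginx" // selector taken from the log
	for {
		pods, err := cs.CoreV1().Pods("ingress-nginx").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && allRunning(pods.Items) {
			fmt.Println("all pods matching", selector, "are Running")
			return
		}
		time.Sleep(500 * time.Millisecond) // poll interval, mirroring the ~0.5s cadence in the log
	}
}

func allRunning(pods []corev1.Pod) bool {
	if len(pods) == 0 {
		return false
	}
	for _, p := range pods {
		if p.Status.Phase != corev1.PodRunning {
			return false
		}
	}
	return true
}

In practice the loop would be bounded by a context deadline or timeout, matching the bounded waits recorded elsewhere in this log.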
	I1001 18:55:50.025756   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:50.026668   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:50.267669   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:50.304468   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:50.526075   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:50.526326   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:50.768807   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:50.805323   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:51.025572   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:51.026351   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:51.268074   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:51.305768   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:51.526059   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:51.526376   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:51.768501   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:51.805013   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:52.025541   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:52.025820   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:52.268174   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:52.305310   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:52.525865   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:52.526118   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:52.767743   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:52.804987   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:53.026311   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:53.026725   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:53.269220   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:53.305447   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:53.528776   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:53.529549   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:53.768687   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:53.805127   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:54.027297   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:54.027524   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:54.268151   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:54.305282   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:54.526062   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:54.526337   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:54.767500   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:54.804748   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:55.025675   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:55.026133   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:55.268404   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:55.304648   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:55.526329   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:55.527336   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:55.778761   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:55.874688   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:56.025330   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:56.026334   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:56.269082   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:56.305856   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:56.526451   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:56.528154   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:56.768201   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:56.805826   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:57.027121   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:57.027258   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:57.269172   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:57.304977   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:57.526351   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:57.526590   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:57.768978   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:57.805536   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:58.025659   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:58.026501   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:58.269001   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:58.305286   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:58.526084   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:58.526665   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:58.768429   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:58.804806   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:59.026094   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:59.026304   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:59.268418   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:59.304817   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:59.526515   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:59.526597   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:59.767811   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:59.805239   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:00.040806   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:00.041206   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:00.267008   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:00.306299   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:00.528191   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:00.528624   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:00.767791   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:00.805230   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:01.026832   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:01.026936   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:01.268009   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:01.305171   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:01.526717   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:01.526953   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:01.767418   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:01.805266   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:02.026936   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:02.027047   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:02.267842   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:02.305105   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:02.526845   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:02.526851   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:02.772693   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:02.807597   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:03.025499   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:03.026255   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:03.268684   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:03.304749   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:03.641187   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:03.641280   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:03.775598   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:03.804940   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:04.026931   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:04.027062   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:04.267296   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:04.305269   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:04.526274   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:04.526294   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:04.768554   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:04.805025   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:05.026457   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:05.027195   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:05.267602   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:05.304856   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:05.526109   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:05.526251   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:05.769032   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:05.804837   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:06.025451   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:06.026310   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:06.268089   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:06.305672   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:06.525242   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:06.526963   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:06.768305   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:06.805363   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:07.026589   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:07.026966   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:07.268173   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:07.304970   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:07.525623   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:07.525784   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:07.767875   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:07.804645   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:08.027996   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:08.029425   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:08.268902   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:08.304990   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:08.526506   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:08.527033   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:08.767956   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:08.805408   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:09.026790   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:09.026978   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:09.268334   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:09.304434   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:09.526629   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:09.526792   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:09.767687   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:09.804823   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:10.026143   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:10.026440   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:10.268627   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:10.305235   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:10.525163   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:10.526098   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:10.767674   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:10.805154   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:11.030249   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:11.030313   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:11.267963   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:11.305412   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:11.526600   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:11.526764   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:11.768337   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:11.805284   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:12.026818   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:12.027684   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:12.268085   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:12.304167   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:12.526893   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:12.527141   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:12.767499   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:12.805096   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:13.026871   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:13.027052   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:13.267903   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:13.304506   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:13.525481   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:13.525930   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:13.768076   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:13.805244   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:14.026136   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:14.026299   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:14.267893   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:14.305575   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:14.525873   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:14.526447   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:14.768188   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:14.805374   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:15.026766   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:15.027178   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:15.268704   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:15.305018   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:15.525416   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:15.526553   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:15.769012   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:15.804814   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:16.026829   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:16.027085   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:16.269425   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:16.305420   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:16.525190   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:16.526103   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:16.768230   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:16.804956   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:17.026689   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:17.027097   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:17.270837   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:17.305485   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:17.527106   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:17.527585   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:18.022760   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:18.023643   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:18.026758   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:18.028247   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:18.268167   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:18.305673   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:18.525578   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:18.526126   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:18.770456   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:18.804702   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:19.025774   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:19.026670   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:19.267283   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:19.304967   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:19.527181   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:19.527743   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:19.768368   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:19.804113   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:20.025739   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:20.026338   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:20.268440   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:20.304515   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:20.526543   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:20.526761   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:20.767899   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:20.805675   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:21.026967   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:21.028151   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:21.267897   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:21.304590   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:21.525719   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:21.527000   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:21.769464   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:21.868969   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:22.025930   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:22.026812   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:22.272213   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:22.305545   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:22.526241   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:22.526287   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:22.768226   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:22.804532   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:23.025816   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:23.026215   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:23.268463   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:23.304826   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:23.525776   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:23.526678   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:23.767547   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:23.805480   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:24.026894   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:24.027382   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:24.269916   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:24.305459   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:24.525644   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:24.527847   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:24.769044   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:24.805086   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:25.027057   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:25.027395   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:25.269294   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:25.304979   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:25.526357   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:25.527753   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:25.926826   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:25.927171   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:26.025449   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:26.026644   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:26.268268   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:26.306368   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:26.526496   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:26.526542   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:26.768258   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:26.804830   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:27.026617   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:27.027189   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:27.269102   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:27.310482   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:27.527333   19130 kapi.go:107] duration metric: took 48.005165013s to wait for kubernetes.io/minikube-addons=registry ...
	I1001 18:56:27.527541   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:27.768019   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:27.806508   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:28.028855   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:28.271037   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:28.314347   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:28.527469   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:28.769253   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:28.804846   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:29.026066   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:29.267391   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:29.304223   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:29.525839   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:29.770180   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:29.808249   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:30.028910   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:30.268603   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:30.312856   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:30.528793   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:30.769012   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:30.805914   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:31.025993   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:31.269924   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:31.304937   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:31.824225   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:31.824538   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:31.830941   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:32.025381   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:32.268065   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:32.305572   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:32.526896   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:32.768054   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:32.805263   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:33.030654   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:33.268552   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:33.304739   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:33.526564   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:33.768756   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:33.869132   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:34.025670   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:34.268822   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:34.305162   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:34.525913   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:34.767654   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:34.805279   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:35.025946   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:35.272489   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:35.304596   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:35.528585   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:35.768740   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:35.805002   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:36.027335   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:36.268329   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:36.304598   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:36.526023   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:36.770703   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:36.804740   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:37.026529   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:37.271591   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:37.371300   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:37.526128   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:37.767646   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:37.804783   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:38.025956   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:38.267638   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:38.304989   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:38.527026   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:38.767635   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:38.805865   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:39.025525   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:39.275749   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:39.309611   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:39.525778   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:39.768286   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:39.808791   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:40.034899   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:40.270158   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:40.306807   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:40.526250   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:40.768782   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:40.804708   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:41.025318   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:41.268327   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:41.305404   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:41.525848   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:41.767472   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:41.804589   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:42.025320   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:42.268125   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:42.305670   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:42.527778   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:42.767606   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:42.805074   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:43.026042   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:43.269285   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:43.304533   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:43.526139   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:43.767981   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:43.805325   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:44.029869   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:44.268527   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:44.305512   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:44.526188   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:44.770555   19130 kapi.go:107] duration metric: took 1m4.507269266s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1001 18:56:44.870080   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:45.027086   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:45.305406   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:45.525742   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:45.806902   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:46.026078   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:46.306409   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:46.526624   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:46.805889   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:47.027251   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:47.305110   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:47.526029   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:47.804997   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:48.025758   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:48.306321   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:48.526908   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:48.804640   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:49.025097   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:49.304456   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:49.525324   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:50.163855   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:50.164295   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:50.305538   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:50.525692   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:50.804560   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:51.025970   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:51.304754   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:51.527386   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:51.805556   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:52.025720   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:52.305190   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:52.526463   19130 kapi.go:107] duration metric: took 1m13.005117219s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1001 18:56:52.805311   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:53.369602   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:53.805211   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:54.306067   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:54.805885   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:55.306994   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:55.805664   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:56.312311   19130 kapi.go:107] duration metric: took 1m14.511016705s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1001 18:56:56.314047   19130 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-800266 cluster.
	I1001 18:56:56.315213   19130 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1001 18:56:56.316366   19130 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1001 18:56:56.317719   19130 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, nvidia-device-plugin, cloud-spanner, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1001 18:56:56.318879   19130 addons.go:510] duration metric: took 1m24.651803136s for enable addons: enabled=[ingress-dns storage-provisioner nvidia-device-plugin cloud-spanner storage-provisioner-rancher inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1001 18:56:56.318921   19130 start.go:246] waiting for cluster config update ...
	I1001 18:56:56.318939   19130 start.go:255] writing updated cluster config ...
	I1001 18:56:56.319187   19130 ssh_runner.go:195] Run: rm -f paused
	I1001 18:56:56.372853   19130 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 18:56:56.374326   19130 out.go:177] * Done! kubectl is now configured to use "addons-800266" cluster and "default" namespace by default
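
For context on the long run of "kapi.go:96 waiting for pod ... current state: Pending" lines above: this is a minimal sketch, not minikube's actual kapi package, of how such a label-selector wait loop can be written with client-go. The namespace ("kube-system"), poll interval, and timeout below are illustrative assumptions; only the selector string is taken from the log.

// Sketch: poll for pods matching a label selector until all are Running,
// mirroring the "waiting for pod ..." loop in the log above. Assumed
// kubeconfig location, namespace, and timings; not minikube's real code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Label selector copied from the log; namespace is an assumption.
	selector := "kubernetes.io/minikube-addons=registry"
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, listErr := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
			if listErr != nil {
				return false, nil // treat API hiccups as transient and keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return len(pods.Items) > 0, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("pods ready for selector", selector)
}
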
	
	
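The gcp-auth tip printed above says to add a label with the `gcp-auth-skip-secret` key to a pod to keep credentials from being mounted into it. Below is an illustrative sketch of creating such a pod with client-go; the pod name, image, namespace, and the label value "true" are assumptions (the log only names the label key).

// Sketch: create a pod labeled to opt out of gcp-auth credential mounting.
// Assumed pod name/image/namespace; label value "true" is an assumption.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-auth-demo",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
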
	==> CRI-O <==
	Oct 01 19:08:10 addons-800266 crio[664]: time="2024-10-01 19:08:10.968418062Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727809690968391152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572900,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=26bab671-0908-41c2-95e3-cd32e6b157aa name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:08:10 addons-800266 crio[664]: time="2024-10-01 19:08:10.969235092Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43271319-5d5a-4f66-80a5-a72e61e31fe5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:08:10 addons-800266 crio[664]: time="2024-10-01 19:08:10.969309056Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43271319-5d5a-4f66-80a5-a72e61e31fe5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:08:10 addons-800266 crio[664]: time="2024-10-01 19:08:10.969618676Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d39ddcb1b5f9aec812d43a7a677a4bdb517e00173cdd5c8a4e9b3e38f24efb67,PodSandboxId:2ab8e5f9df124108b664b6448e5fdb88387e2e454c9759c1dbdca7adce4481ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727809680350390423,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 78001029-2e99-4c25-bac6-3c4d1c7efca3,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c060f2deb15ed4167efb8db5c219671004ab8d53470f30e0c3d7d653951f0a,PodSandboxId:10fe9643818d6d1f3a7a277d92e6efc4fbc30e5dd21871399dc5e79554e961e3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727809550424447318,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43c83fb0-f623-43ea-bc3c-91da7206fa2c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd535fca92d977b6888e043df8b2f4c9702ef451c7e3be0e11c3a60a130f6872,PodSandboxId:670db599ae7efcfd8425df00aa04736c4cfb627eab68f7774a53c1a2f407c5c5,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1727809011398759308,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-8qgj9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 64a369ac-d67b-4aea-8412-f24f6b9c045b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a7b9f467817a57eae31ddf65cdb510d37ee24681bd93cb54411a11980b7df2d0,PodSandboxId:42a964364b0fa0edef5dec00c53eaca9ae0ffeba19ccc0aaf49b31d46c890312,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Stat
e:CONTAINER_EXITED,CreatedAt:1727808990342350558,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hdgw4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b05b84e5-2385-46dc-af4f-4ac2c3759b3e,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd37bbd19b04db1537ed1f9a6bf25360b86a3682e78e4bc984e3fb7565e00e16,PodSandboxId:6172e12e928deaf4b37b4f55f75ca889ff7e06feee1000501470eb46277c2dc8,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52
b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727808989956436445,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-cms54,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2734038e-dc95-42f0-a646-142b38fd115b,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850bea5323c259645b8f0e337f2d7756596d06da4dfe13ba3f7972eaca837ff0,PodSandboxId:9e6e8e9034d4e5aaa218d7d1d9c3bc0dbc125129322f313ae43c82560fd4203b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727808968959259879,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-7mp6j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f319c15f-c9b0-400d-89b5-d388e9a49218,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4a82c29ad75f23f4b4cd2960a89acd1c4f12ad466c208912384f3d3ebd023a2,PodSandboxId:44d67d7069dd1b90838b80e2d46749958e44ccd416c5aac4338a4b1d1431d33b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@s
ha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1727808963776460160,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c841f466-ff18-4ddc-8a0c-d01d392f05e4,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26504377c61b094c87f9c57dc6547187209d3397c940e127450701ed086d4170,PodSandboxId:e0fe8e6e2e03c67898468faddcb544c439feea381e7e5c4b053c35f24a6
2ba1d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727808937782393443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03188f24-2d63-42be-9351-a533a36261f1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588e2b860d1061d153b3e800d62e0681a5e7a74baada9e285edd8def6802801a,PodSandboxId:c94c530e7579b7788bbfa881f4333ef9b1b4e7a763807af4db7e658277e898f0,Metada
ta:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727808932951927591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h656l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf425bf-e9a1-4f2b-98e3-38dc3f94625d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod
.terminationGracePeriod: 30,},},&Container{Id:7e9081f37c3fb61a5375d14bdb14a2c60017c1e8a63c43d60390321737cd070b,PodSandboxId:3383cc1410018df23bcb5aae6c0d4f0e26f5fb5ad129a65f48d09f587b7824d2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727808932050539867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x9xtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d9fe6d-e5fb-43c9-b8d3-8075cec0186a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&
Container{Id:f2e78592209ec593f29721f4652b4edbdb5574f343b6c6d59e5bc1b4ec8ddb5e,PodSandboxId:affc68d28d7cc11dc7b2fdd3f98016b29c1b381ed0b0e67c0baf603398373f07,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727808921169082687,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e31f2ac286141c3c6cb5bc1d1fd9d8d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f69e4bbfce257ab72b58b6725f7ad1549
dbfb02a122f66601536180d27ad34a,PodSandboxId:41c2b5c7ce0b8532e9454993a076bc07bb256a0e21c900aea5d34c63ab149409,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727808921152193749,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88c118d760841c2b05582d2c66532469,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f42a255e28d30da9b9b571370bb7f475734b4e820e95dcfc7e0
8e3366164272b,PodSandboxId:c3f4e55c0d2f6b266914b9bde04ea61c23e795b7087c92230f699f4f7dd675c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727808921090475270,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51328b0912537964eeb48bb5e91ec731,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868f38fe5a25463a1bdbe6eaafc9fdd61fcd07ad2
bcc794f9562d0b8dd1b2c67,PodSandboxId:5cd1a5dc883d0f27ba6f9dcdbe48ab65759faa0e4b09187ecc0d83cd2064c461,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727808921071073904,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0368aed84826471dbccaebb4039370c,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=43271319-5d5a-4f66-80a5
-a72e61e31fe5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:08:11 addons-800266 crio[664]: time="2024-10-01 19:08:11.013214611Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dbd7fca2-f9f4-43a1-9e4e-e267d524c216 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:08:11 addons-800266 crio[664]: time="2024-10-01 19:08:11.013292218Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dbd7fca2-f9f4-43a1-9e4e-e267d524c216 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:08:11 addons-800266 crio[664]: time="2024-10-01 19:08:11.014431228Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c1cb8f37-2716-491c-83e6-1d0990288dd3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:08:11 addons-800266 crio[664]: time="2024-10-01 19:08:11.015545220Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727809691015520343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572900,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1cb8f37-2716-491c-83e6-1d0990288dd3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:08:11 addons-800266 crio[664]: time="2024-10-01 19:08:11.016258265Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=328bc4e0-e747-42b9-ad52-f4ae66154413 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:08:11 addons-800266 crio[664]: time="2024-10-01 19:08:11.016309964Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=328bc4e0-e747-42b9-ad52-f4ae66154413 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:08:11 addons-800266 crio[664]: time="2024-10-01 19:08:11.018397646Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d39ddcb1b5f9aec812d43a7a677a4bdb517e00173cdd5c8a4e9b3e38f24efb67,PodSandboxId:2ab8e5f9df124108b664b6448e5fdb88387e2e454c9759c1dbdca7adce4481ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727809680350390423,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 78001029-2e99-4c25-bac6-3c4d1c7efca3,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c060f2deb15ed4167efb8db5c219671004ab8d53470f30e0c3d7d653951f0a,PodSandboxId:10fe9643818d6d1f3a7a277d92e6efc4fbc30e5dd21871399dc5e79554e961e3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727809550424447318,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43c83fb0-f623-43ea-bc3c-91da7206fa2c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd535fca92d977b6888e043df8b2f4c9702ef451c7e3be0e11c3a60a130f6872,PodSandboxId:670db599ae7efcfd8425df00aa04736c4cfb627eab68f7774a53c1a2f407c5c5,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1727809011398759308,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-8qgj9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 64a369ac-d67b-4aea-8412-f24f6b9c045b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a7b9f467817a57eae31ddf65cdb510d37ee24681bd93cb54411a11980b7df2d0,PodSandboxId:42a964364b0fa0edef5dec00c53eaca9ae0ffeba19ccc0aaf49b31d46c890312,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Stat
e:CONTAINER_EXITED,CreatedAt:1727808990342350558,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hdgw4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b05b84e5-2385-46dc-af4f-4ac2c3759b3e,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd37bbd19b04db1537ed1f9a6bf25360b86a3682e78e4bc984e3fb7565e00e16,PodSandboxId:6172e12e928deaf4b37b4f55f75ca889ff7e06feee1000501470eb46277c2dc8,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52
b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727808989956436445,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-cms54,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2734038e-dc95-42f0-a646-142b38fd115b,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850bea5323c259645b8f0e337f2d7756596d06da4dfe13ba3f7972eaca837ff0,PodSandboxId:9e6e8e9034d4e5aaa218d7d1d9c3bc0dbc125129322f313ae43c82560fd4203b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727808968959259879,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-7mp6j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f319c15f-c9b0-400d-89b5-d388e9a49218,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4a82c29ad75f23f4b4cd2960a89acd1c4f12ad466c208912384f3d3ebd023a2,PodSandboxId:44d67d7069dd1b90838b80e2d46749958e44ccd416c5aac4338a4b1d1431d33b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@s
ha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1727808963776460160,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c841f466-ff18-4ddc-8a0c-d01d392f05e4,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26504377c61b094c87f9c57dc6547187209d3397c940e127450701ed086d4170,PodSandboxId:e0fe8e6e2e03c67898468faddcb544c439feea381e7e5c4b053c35f24a6
2ba1d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727808937782393443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03188f24-2d63-42be-9351-a533a36261f1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588e2b860d1061d153b3e800d62e0681a5e7a74baada9e285edd8def6802801a,PodSandboxId:c94c530e7579b7788bbfa881f4333ef9b1b4e7a763807af4db7e658277e898f0,Metada
ta:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727808932951927591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h656l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf425bf-e9a1-4f2b-98e3-38dc3f94625d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod
.terminationGracePeriod: 30,},},&Container{Id:7e9081f37c3fb61a5375d14bdb14a2c60017c1e8a63c43d60390321737cd070b,PodSandboxId:3383cc1410018df23bcb5aae6c0d4f0e26f5fb5ad129a65f48d09f587b7824d2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727808932050539867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x9xtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d9fe6d-e5fb-43c9-b8d3-8075cec0186a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&
Container{Id:f2e78592209ec593f29721f4652b4edbdb5574f343b6c6d59e5bc1b4ec8ddb5e,PodSandboxId:affc68d28d7cc11dc7b2fdd3f98016b29c1b381ed0b0e67c0baf603398373f07,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727808921169082687,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e31f2ac286141c3c6cb5bc1d1fd9d8d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f69e4bbfce257ab72b58b6725f7ad1549
dbfb02a122f66601536180d27ad34a,PodSandboxId:41c2b5c7ce0b8532e9454993a076bc07bb256a0e21c900aea5d34c63ab149409,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727808921152193749,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88c118d760841c2b05582d2c66532469,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f42a255e28d30da9b9b571370bb7f475734b4e820e95dcfc7e0
8e3366164272b,PodSandboxId:c3f4e55c0d2f6b266914b9bde04ea61c23e795b7087c92230f699f4f7dd675c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727808921090475270,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51328b0912537964eeb48bb5e91ec731,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868f38fe5a25463a1bdbe6eaafc9fdd61fcd07ad2
bcc794f9562d0b8dd1b2c67,PodSandboxId:5cd1a5dc883d0f27ba6f9dcdbe48ab65759faa0e4b09187ecc0d83cd2064c461,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727808921071073904,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0368aed84826471dbccaebb4039370c,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=328bc4e0-e747-42b9-ad52
-f4ae66154413 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:08:11 addons-800266 crio[664]: time="2024-10-01 19:08:11.060018817Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=807e1272-886c-4735-8618-da90d73a9334 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:08:11 addons-800266 crio[664]: time="2024-10-01 19:08:11.060093751Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=807e1272-886c-4735-8618-da90d73a9334 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:08:11 addons-800266 crio[664]: time="2024-10-01 19:08:11.061291941Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2900b92f-dff5-43cc-b636-80ba4bdf2e3b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:08:11 addons-800266 crio[664]: time="2024-10-01 19:08:11.062435958Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727809691062408591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572900,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2900b92f-dff5-43cc-b636-80ba4bdf2e3b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:08:11 addons-800266 crio[664]: time="2024-10-01 19:08:11.063343855Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=794b6b6e-2d16-4822-a994-90e0154bb114 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:08:11 addons-800266 crio[664]: time="2024-10-01 19:08:11.063414167Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=794b6b6e-2d16-4822-a994-90e0154bb114 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:08:11 addons-800266 crio[664]: time="2024-10-01 19:08:11.063754829Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d39ddcb1b5f9aec812d43a7a677a4bdb517e00173cdd5c8a4e9b3e38f24efb67,PodSandboxId:2ab8e5f9df124108b664b6448e5fdb88387e2e454c9759c1dbdca7adce4481ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727809680350390423,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 78001029-2e99-4c25-bac6-3c4d1c7efca3,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c060f2deb15ed4167efb8db5c219671004ab8d53470f30e0c3d7d653951f0a,PodSandboxId:10fe9643818d6d1f3a7a277d92e6efc4fbc30e5dd21871399dc5e79554e961e3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727809550424447318,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43c83fb0-f623-43ea-bc3c-91da7206fa2c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd535fca92d977b6888e043df8b2f4c9702ef451c7e3be0e11c3a60a130f6872,PodSandboxId:670db599ae7efcfd8425df00aa04736c4cfb627eab68f7774a53c1a2f407c5c5,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1727809011398759308,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-8qgj9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 64a369ac-d67b-4aea-8412-f24f6b9c045b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a7b9f467817a57eae31ddf65cdb510d37ee24681bd93cb54411a11980b7df2d0,PodSandboxId:42a964364b0fa0edef5dec00c53eaca9ae0ffeba19ccc0aaf49b31d46c890312,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Stat
e:CONTAINER_EXITED,CreatedAt:1727808990342350558,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hdgw4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b05b84e5-2385-46dc-af4f-4ac2c3759b3e,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd37bbd19b04db1537ed1f9a6bf25360b86a3682e78e4bc984e3fb7565e00e16,PodSandboxId:6172e12e928deaf4b37b4f55f75ca889ff7e06feee1000501470eb46277c2dc8,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52
b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727808989956436445,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-cms54,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2734038e-dc95-42f0-a646-142b38fd115b,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850bea5323c259645b8f0e337f2d7756596d06da4dfe13ba3f7972eaca837ff0,PodSandboxId:9e6e8e9034d4e5aaa218d7d1d9c3bc0dbc125129322f313ae43c82560fd4203b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727808968959259879,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-7mp6j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f319c15f-c9b0-400d-89b5-d388e9a49218,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4a82c29ad75f23f4b4cd2960a89acd1c4f12ad466c208912384f3d3ebd023a2,PodSandboxId:44d67d7069dd1b90838b80e2d46749958e44ccd416c5aac4338a4b1d1431d33b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@s
ha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1727808963776460160,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c841f466-ff18-4ddc-8a0c-d01d392f05e4,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26504377c61b094c87f9c57dc6547187209d3397c940e127450701ed086d4170,PodSandboxId:e0fe8e6e2e03c67898468faddcb544c439feea381e7e5c4b053c35f24a6
2ba1d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727808937782393443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03188f24-2d63-42be-9351-a533a36261f1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588e2b860d1061d153b3e800d62e0681a5e7a74baada9e285edd8def6802801a,PodSandboxId:c94c530e7579b7788bbfa881f4333ef9b1b4e7a763807af4db7e658277e898f0,Metada
ta:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727808932951927591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h656l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf425bf-e9a1-4f2b-98e3-38dc3f94625d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod
.terminationGracePeriod: 30,},},&Container{Id:7e9081f37c3fb61a5375d14bdb14a2c60017c1e8a63c43d60390321737cd070b,PodSandboxId:3383cc1410018df23bcb5aae6c0d4f0e26f5fb5ad129a65f48d09f587b7824d2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727808932050539867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x9xtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d9fe6d-e5fb-43c9-b8d3-8075cec0186a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&
Container{Id:f2e78592209ec593f29721f4652b4edbdb5574f343b6c6d59e5bc1b4ec8ddb5e,PodSandboxId:affc68d28d7cc11dc7b2fdd3f98016b29c1b381ed0b0e67c0baf603398373f07,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727808921169082687,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e31f2ac286141c3c6cb5bc1d1fd9d8d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f69e4bbfce257ab72b58b6725f7ad1549
dbfb02a122f66601536180d27ad34a,PodSandboxId:41c2b5c7ce0b8532e9454993a076bc07bb256a0e21c900aea5d34c63ab149409,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727808921152193749,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88c118d760841c2b05582d2c66532469,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f42a255e28d30da9b9b571370bb7f475734b4e820e95dcfc7e0
8e3366164272b,PodSandboxId:c3f4e55c0d2f6b266914b9bde04ea61c23e795b7087c92230f699f4f7dd675c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727808921090475270,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51328b0912537964eeb48bb5e91ec731,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868f38fe5a25463a1bdbe6eaafc9fdd61fcd07ad2
bcc794f9562d0b8dd1b2c67,PodSandboxId:5cd1a5dc883d0f27ba6f9dcdbe48ab65759faa0e4b09187ecc0d83cd2064c461,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727808921071073904,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0368aed84826471dbccaebb4039370c,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=794b6b6e-2d16-4822-a994
-90e0154bb114 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:08:11 addons-800266 crio[664]: time="2024-10-01 19:08:11.097996302Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8d7d99a7-d708-449a-9442-ae22fc62af59 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:08:11 addons-800266 crio[664]: time="2024-10-01 19:08:11.098086273Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8d7d99a7-d708-449a-9442-ae22fc62af59 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:08:11 addons-800266 crio[664]: time="2024-10-01 19:08:11.100286835Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ba05a985-f96a-444f-9e6e-2f214a349423 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:08:11 addons-800266 crio[664]: time="2024-10-01 19:08:11.101428292Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727809691101395808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572900,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ba05a985-f96a-444f-9e6e-2f214a349423 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:08:11 addons-800266 crio[664]: time="2024-10-01 19:08:11.101995846Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=499322a0-557b-4345-bc79-f8740d11a54e name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:08:11 addons-800266 crio[664]: time="2024-10-01 19:08:11.102063085Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=499322a0-557b-4345-bc79-f8740d11a54e name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:08:11 addons-800266 crio[664]: time="2024-10-01 19:08:11.102351316Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d39ddcb1b5f9aec812d43a7a677a4bdb517e00173cdd5c8a4e9b3e38f24efb67,PodSandboxId:2ab8e5f9df124108b664b6448e5fdb88387e2e454c9759c1dbdca7adce4481ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727809680350390423,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 78001029-2e99-4c25-bac6-3c4d1c7efca3,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c060f2deb15ed4167efb8db5c219671004ab8d53470f30e0c3d7d653951f0a,PodSandboxId:10fe9643818d6d1f3a7a277d92e6efc4fbc30e5dd21871399dc5e79554e961e3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727809550424447318,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43c83fb0-f623-43ea-bc3c-91da7206fa2c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd535fca92d977b6888e043df8b2f4c9702ef451c7e3be0e11c3a60a130f6872,PodSandboxId:670db599ae7efcfd8425df00aa04736c4cfb627eab68f7774a53c1a2f407c5c5,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1727809011398759308,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-8qgj9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 64a369ac-d67b-4aea-8412-f24f6b9c045b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a7b9f467817a57eae31ddf65cdb510d37ee24681bd93cb54411a11980b7df2d0,PodSandboxId:42a964364b0fa0edef5dec00c53eaca9ae0ffeba19ccc0aaf49b31d46c890312,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Stat
e:CONTAINER_EXITED,CreatedAt:1727808990342350558,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hdgw4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b05b84e5-2385-46dc-af4f-4ac2c3759b3e,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd37bbd19b04db1537ed1f9a6bf25360b86a3682e78e4bc984e3fb7565e00e16,PodSandboxId:6172e12e928deaf4b37b4f55f75ca889ff7e06feee1000501470eb46277c2dc8,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52
b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727808989956436445,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-cms54,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2734038e-dc95-42f0-a646-142b38fd115b,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850bea5323c259645b8f0e337f2d7756596d06da4dfe13ba3f7972eaca837ff0,PodSandboxId:9e6e8e9034d4e5aaa218d7d1d9c3bc0dbc125129322f313ae43c82560fd4203b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727808968959259879,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-7mp6j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f319c15f-c9b0-400d-89b5-d388e9a49218,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4a82c29ad75f23f4b4cd2960a89acd1c4f12ad466c208912384f3d3ebd023a2,PodSandboxId:44d67d7069dd1b90838b80e2d46749958e44ccd416c5aac4338a4b1d1431d33b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@s
ha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1727808963776460160,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c841f466-ff18-4ddc-8a0c-d01d392f05e4,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26504377c61b094c87f9c57dc6547187209d3397c940e127450701ed086d4170,PodSandboxId:e0fe8e6e2e03c67898468faddcb544c439feea381e7e5c4b053c35f24a6
2ba1d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727808937782393443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03188f24-2d63-42be-9351-a533a36261f1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588e2b860d1061d153b3e800d62e0681a5e7a74baada9e285edd8def6802801a,PodSandboxId:c94c530e7579b7788bbfa881f4333ef9b1b4e7a763807af4db7e658277e898f0,Metada
ta:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727808932951927591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h656l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf425bf-e9a1-4f2b-98e3-38dc3f94625d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod
.terminationGracePeriod: 30,},},&Container{Id:7e9081f37c3fb61a5375d14bdb14a2c60017c1e8a63c43d60390321737cd070b,PodSandboxId:3383cc1410018df23bcb5aae6c0d4f0e26f5fb5ad129a65f48d09f587b7824d2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727808932050539867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x9xtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d9fe6d-e5fb-43c9-b8d3-8075cec0186a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&
Container{Id:f2e78592209ec593f29721f4652b4edbdb5574f343b6c6d59e5bc1b4ec8ddb5e,PodSandboxId:affc68d28d7cc11dc7b2fdd3f98016b29c1b381ed0b0e67c0baf603398373f07,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727808921169082687,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e31f2ac286141c3c6cb5bc1d1fd9d8d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f69e4bbfce257ab72b58b6725f7ad1549
dbfb02a122f66601536180d27ad34a,PodSandboxId:41c2b5c7ce0b8532e9454993a076bc07bb256a0e21c900aea5d34c63ab149409,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727808921152193749,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88c118d760841c2b05582d2c66532469,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f42a255e28d30da9b9b571370bb7f475734b4e820e95dcfc7e0
8e3366164272b,PodSandboxId:c3f4e55c0d2f6b266914b9bde04ea61c23e795b7087c92230f699f4f7dd675c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727808921090475270,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51328b0912537964eeb48bb5e91ec731,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868f38fe5a25463a1bdbe6eaafc9fdd61fcd07ad2
bcc794f9562d0b8dd1b2c67,PodSandboxId:5cd1a5dc883d0f27ba6f9dcdbe48ab65759faa0e4b09187ecc0d83cd2064c461,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727808921071073904,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0368aed84826471dbccaebb4039370c,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=499322a0-557b-4345-bc79
-f8740d11a54e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d39ddcb1b5f9a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          10 seconds ago      Running             busybox                   0                   2ab8e5f9df124       busybox
	f2c060f2deb15       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              2 minutes ago       Running             nginx                     0                   10fe9643818d6       nginx
	fd535fca92d97       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             11 minutes ago      Running             controller                0                   670db599ae7ef       ingress-nginx-controller-bc57996ff-8qgj9
	a7b9f467817a5       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                             11 minutes ago      Exited              patch                     1                   42a964364b0fa       ingress-nginx-admission-patch-hdgw4
	dd37bbd19b04d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   11 minutes ago      Exited              create                    0                   6172e12e928de       ingress-nginx-admission-create-cms54
	850bea5323c25       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        12 minutes ago      Running             metrics-server            0                   9e6e8e9034d4e       metrics-server-84c5f94fbc-7mp6j
	f4a82c29ad75f       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             12 minutes ago      Running             minikube-ingress-dns      0                   44d67d7069dd1       kube-ingress-dns-minikube
	26504377c61b0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             12 minutes ago      Running             storage-provisioner       0                   e0fe8e6e2e03c       storage-provisioner
	588e2b860d106       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             12 minutes ago      Running             coredns                   0                   c94c530e7579b       coredns-7c65d6cfc9-h656l
	7e9081f37c3fb       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             12 minutes ago      Running             kube-proxy                0                   3383cc1410018       kube-proxy-x9xtt
	f2e78592209ec       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             12 minutes ago      Running             etcd                      0                   affc68d28d7cc       etcd-addons-800266
	1f69e4bbfce25       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             12 minutes ago      Running             kube-scheduler            0                   41c2b5c7ce0b8       kube-scheduler-addons-800266
	f42a255e28d30       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             12 minutes ago      Running             kube-controller-manager   0                   c3f4e55c0d2f6       kube-controller-manager-addons-800266
	868f38fe5a254       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             12 minutes ago      Running             kube-apiserver            0                   5cd1a5dc883d0       kube-apiserver-addons-800266
	
	
	==> coredns [588e2b860d1061d153b3e800d62e0681a5e7a74baada9e285edd8def6802801a] <==
	[INFO] 10.244.0.7:45961 - 57803 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 97 false 1232" NXDOMAIN qr,aa,rd 179 0.000112037s
	[INFO] 10.244.0.7:45961 - 44922 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000085644s
	[INFO] 10.244.0.7:45961 - 50643 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000058445s
	[INFO] 10.244.0.7:45961 - 43086 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000222022s
	[INFO] 10.244.0.7:45961 - 38824 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000631882s
	[INFO] 10.244.0.7:45961 - 50099 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000124041s
	[INFO] 10.244.0.7:45961 - 10027 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00007539s
	[INFO] 10.244.0.7:40326 - 30121 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000162415s
	[INFO] 10.244.0.7:40326 - 29833 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000049043s
	[INFO] 10.244.0.7:47632 - 9912 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000062655s
	[INFO] 10.244.0.7:47632 - 9477 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000028787s
	[INFO] 10.244.0.7:43123 - 2659 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000054484s
	[INFO] 10.244.0.7:43123 - 2438 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000028592s
	[INFO] 10.244.0.7:42154 - 40156 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000048323s
	[INFO] 10.244.0.7:42154 - 39728 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000049124s
	[INFO] 10.244.0.21:38134 - 13555 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000469901s
	[INFO] 10.244.0.21:32900 - 26381 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000139126s
	[INFO] 10.244.0.21:58737 - 53077 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000152144s
	[INFO] 10.244.0.21:52005 - 59080 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000095374s
	[INFO] 10.244.0.21:33758 - 35492 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000088961s
	[INFO] 10.244.0.21:54991 - 61944 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000065645s
	[INFO] 10.244.0.21:34644 - 36950 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001168932s
	[INFO] 10.244.0.21:47701 - 39935 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002084556s
	[INFO] 10.244.0.25:54706 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000568891s
	[INFO] 10.244.0.25:35703 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000159033s
	
	
	==> describe nodes <==
	Name:               addons-800266
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-800266
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=addons-800266
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T18_55_26_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-800266
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 18:55:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-800266
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:08:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 19:05:57 +0000   Tue, 01 Oct 2024 18:55:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 19:05:57 +0000   Tue, 01 Oct 2024 18:55:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 19:05:57 +0000   Tue, 01 Oct 2024 18:55:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 19:05:57 +0000   Tue, 01 Oct 2024 18:55:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.56
	  Hostname:    addons-800266
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 e369f6d6d9654b1f858197dec59d1591
	  System UUID:                e369f6d6-d965-4b1f-8581-97dec59d1591
	  Boot ID:                    e7e1b035-60f6-4998-aa54-57f01ff745eb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-world-app-55bf9c44b4-46nkk            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-8qgj9    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-h656l                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-800266                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-800266                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-800266       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-x9xtt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-800266                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-7mp6j             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node addons-800266 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node addons-800266 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node addons-800266 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m   kubelet          Node addons-800266 status is now: NodeReady
	  Normal  RegisteredNode           12m   node-controller  Node addons-800266 event: Registered Node addons-800266 in Controller
	
	
	==> dmesg <==
	[  +0.081747] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.588643] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.358343] systemd-fstab-generator[1496]: Ignoring "noauto" option for root device
	[  +4.644593] kauditd_printk_skb: 113 callbacks suppressed
	[  +5.069667] kauditd_printk_skb: 148 callbacks suppressed
	[  +7.554187] kauditd_printk_skb: 53 callbacks suppressed
	[Oct 1 18:56] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.717659] kauditd_printk_skb: 29 callbacks suppressed
	[ +11.678742] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.923509] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.214391] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.927317] kauditd_printk_skb: 15 callbacks suppressed
	[  +8.139816] kauditd_printk_skb: 6 callbacks suppressed
	[Oct 1 18:57] kauditd_printk_skb: 6 callbacks suppressed
	[Oct 1 19:05] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.021279] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.610237] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.377026] kauditd_printk_skb: 39 callbacks suppressed
	[  +6.050567] kauditd_printk_skb: 49 callbacks suppressed
	[  +6.215791] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.048791] kauditd_printk_skb: 9 callbacks suppressed
	[  +9.611037] kauditd_printk_skb: 18 callbacks suppressed
	[Oct 1 19:06] kauditd_printk_skb: 15 callbacks suppressed
	[ +18.938698] kauditd_printk_skb: 49 callbacks suppressed
	[Oct 1 19:07] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [f2e78592209ec593f29721f4652b4edbdb5574f343b6c6d59e5bc1b4ec8ddb5e] <==
	{"level":"warn","ts":"2024-10-01T18:56:50.150039Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T18:56:49.735065Z","time spent":"414.880876ms","remote":"127.0.0.1:60508","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1069 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2024-10-01T18:56:50.150127Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"287.208252ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T18:56:50.150169Z","caller":"traceutil/trace.go:171","msg":"trace[216095344] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1079; }","duration":"287.256449ms","start":"2024-10-01T18:56:49.862904Z","end":"2024-10-01T18:56:50.150160Z","steps":["trace[216095344] 'agreement among raft nodes before linearized reading'  (duration: 287.194674ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T18:56:50.150296Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.127176ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T18:56:50.150332Z","caller":"traceutil/trace.go:171","msg":"trace[150706041] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1079; }","duration":"136.164037ms","start":"2024-10-01T18:56:50.014162Z","end":"2024-10-01T18:56:50.150326Z","steps":["trace[150706041] 'agreement among raft nodes before linearized reading'  (duration: 136.113254ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T18:56:50.150349Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"275.998928ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-01T18:56:50.150370Z","caller":"traceutil/trace.go:171","msg":"trace[217688362] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:1079; }","duration":"276.021502ms","start":"2024-10-01T18:56:49.874341Z","end":"2024-10-01T18:56:50.150363Z","steps":["trace[217688362] 'agreement among raft nodes before linearized reading'  (duration: 275.981089ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T18:56:50.150442Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"356.948245ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T18:56:50.150467Z","caller":"traceutil/trace.go:171","msg":"trace[2015084443] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1079; }","duration":"356.968244ms","start":"2024-10-01T18:56:49.793489Z","end":"2024-10-01T18:56:50.150457Z","steps":["trace[2015084443] 'agreement among raft nodes before linearized reading'  (duration: 356.929735ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T18:56:50.150489Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T18:56:49.793456Z","time spent":"357.027743ms","remote":"127.0.0.1:60426","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-01T18:56:55.772413Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.764463ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T18:56:55.772659Z","caller":"traceutil/trace.go:171","msg":"trace[191106317] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1099; }","duration":"216.022989ms","start":"2024-10-01T18:56:55.556616Z","end":"2024-10-01T18:56:55.772639Z","steps":["trace[191106317] 'range keys from in-memory index tree'  (duration: 215.720422ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T19:05:15.650658Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T19:05:15.286030Z","time spent":"364.616663ms","remote":"127.0.0.1:60278","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2024-10-01T19:05:22.177193Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1466}
	{"level":"info","ts":"2024-10-01T19:05:22.209956Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1466,"took":"32.253715ms","hash":4096953151,"current-db-size-bytes":6348800,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":3366912,"current-db-size-in-use":"3.4 MB"}
	{"level":"info","ts":"2024-10-01T19:05:22.210064Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4096953151,"revision":1466,"compact-revision":-1}
	{"level":"info","ts":"2024-10-01T19:05:47.501580Z","caller":"traceutil/trace.go:171","msg":"trace[217967233] linearizableReadLoop","detail":"{readStateIndex:2360; appliedIndex:2359; }","duration":"130.145136ms","start":"2024-10-01T19:05:47.371407Z","end":"2024-10-01T19:05:47.501552Z","steps":["trace[217967233] 'read index received'  (duration: 130.00794ms)","trace[217967233] 'applied index is now lower than readState.Index'  (duration: 136.393µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-01T19:05:47.501689Z","caller":"traceutil/trace.go:171","msg":"trace[61397452] transaction","detail":"{read_only:false; response_revision:2206; number_of_response:1; }","duration":"182.775156ms","start":"2024-10-01T19:05:47.318904Z","end":"2024-10-01T19:05:47.501679Z","steps":["trace[61397452] 'process raft request'  (duration: 182.527494ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T19:05:47.501878Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.44547ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T19:05:47.501910Z","caller":"traceutil/trace.go:171","msg":"trace[1589703985] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2206; }","duration":"130.519248ms","start":"2024-10-01T19:05:47.371385Z","end":"2024-10-01T19:05:47.501904Z","steps":["trace[1589703985] 'agreement among raft nodes before linearized reading'  (duration: 130.415027ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T19:06:13.366447Z","caller":"traceutil/trace.go:171","msg":"trace[368892018] linearizableReadLoop","detail":"{readStateIndex:2577; appliedIndex:2576; }","duration":"275.996881ms","start":"2024-10-01T19:06:13.090437Z","end":"2024-10-01T19:06:13.366434Z","steps":["trace[368892018] 'read index received'  (duration: 275.842966ms)","trace[368892018] 'applied index is now lower than readState.Index'  (duration: 153.241µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-01T19:06:13.366621Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"276.166839ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/csi-resizer\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T19:06:13.366665Z","caller":"traceutil/trace.go:171","msg":"trace[1039750024] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-resizer; range_end:; response_count:0; response_revision:2413; }","duration":"276.224657ms","start":"2024-10-01T19:06:13.090434Z","end":"2024-10-01T19:06:13.366658Z","steps":["trace[1039750024] 'agreement among raft nodes before linearized reading'  (duration: 276.150291ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T19:06:13.366680Z","caller":"traceutil/trace.go:171","msg":"trace[1223254419] transaction","detail":"{read_only:false; response_revision:2413; number_of_response:1; }","duration":"305.736279ms","start":"2024-10-01T19:06:13.060931Z","end":"2024-10-01T19:06:13.366668Z","steps":["trace[1223254419] 'process raft request'  (duration: 305.400048ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T19:06:13.366861Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T19:06:13.060916Z","time spent":"305.865431ms","remote":"127.0.0.1:60410","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:2406 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 19:08:11 up 13 min,  0 users,  load average: 0.25, 0.33, 0.30
	Linux addons-800266 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [868f38fe5a25463a1bdbe6eaafc9fdd61fcd07ad2bcc794f9562d0b8dd1b2c67] <==
	E1001 18:57:17.310562       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.79.139:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.79.139:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.79.139:443: connect: connection refused" logger="UnhandledError"
	E1001 18:57:17.317310       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.79.139:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.79.139:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.79.139:443: connect: connection refused" logger="UnhandledError"
	E1001 18:57:17.340366       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.79.139:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.79.139:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.79.139:443: connect: connection refused" logger="UnhandledError"
	I1001 18:57:17.446602       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1001 19:05:10.759972       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.11.198"}
	I1001 19:05:39.912463       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1001 19:05:41.037899       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1001 19:05:45.764131       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1001 19:05:46.072362       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.218.37"}
	E1001 19:05:47.308520       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1001 19:05:54.444889       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1001 19:06:09.063802       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 19:06:09.064008       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 19:06:09.082635       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 19:06:09.082810       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 19:06:09.112033       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 19:06:09.112130       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 19:06:09.122674       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 19:06:09.122797       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 19:06:09.153120       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 19:06:09.153550       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1001 19:06:10.112407       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1001 19:06:10.153605       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1001 19:06:10.265072       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1001 19:08:09.963064       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.125.46"}
	
	
	==> kube-controller-manager [f42a255e28d30da9b9b571370bb7f475734b4e820e95dcfc7e08e3366164272b] <==
	E1001 19:06:44.003057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 19:06:46.441831       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 19:06:46.442816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 19:06:55.746801       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 19:06:55.746916       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 19:06:56.181530       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 19:06:56.181654       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 19:07:28.037669       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 19:07:28.037832       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 19:07:29.449329       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 19:07:29.449483       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 19:07:32.860703       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 19:07:32.860851       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 19:07:34.474155       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 19:07:34.474271       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 19:08:01.505810       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 19:08:01.505860       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 19:08:09.713835       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 19:08:09.713986       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1001 19:08:09.768650       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="42.652992ms"
	I1001 19:08:09.793416       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="24.576188ms"
	I1001 19:08:09.803100       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.571425ms"
	I1001 19:08:09.803259       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="77.402µs"
	W1001 19:08:10.729213       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 19:08:10.729248       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [7e9081f37c3fb61a5375d14bdb14a2c60017c1e8a63c43d60390321737cd070b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 18:55:32.753592       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 18:55:32.764841       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.56"]
	E1001 18:55:32.764945       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 18:55:32.883864       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 18:55:32.883937       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 18:55:32.883970       1 server_linux.go:169] "Using iptables Proxier"
	I1001 18:55:32.886953       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 18:55:32.887235       1 server.go:483] "Version info" version="v1.31.1"
	I1001 18:55:32.887246       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 18:55:32.888643       1 config.go:199] "Starting service config controller"
	I1001 18:55:32.888665       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 18:55:32.888747       1 config.go:105] "Starting endpoint slice config controller"
	I1001 18:55:32.888765       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 18:55:32.889264       1 config.go:328] "Starting node config controller"
	I1001 18:55:32.889285       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 18:55:32.988968       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 18:55:32.989039       1 shared_informer.go:320] Caches are synced for service config
	I1001 18:55:32.990803       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1f69e4bbfce257ab72b58b6725f7ad1549dbfb02a122f66601536180d27ad34a] <==
	W1001 18:55:23.554557       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1001 18:55:23.554597       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1001 18:55:23.555016       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1001 18:55:23.555076       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 18:55:23.555326       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1001 18:55:23.555355       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 18:55:23.555431       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1001 18:55:23.555462       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1001 18:55:24.361365       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1001 18:55:24.361410       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1001 18:55:24.377658       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1001 18:55:24.377812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 18:55:24.466113       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1001 18:55:24.466234       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 18:55:24.481156       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1001 18:55:24.481267       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 18:55:24.490703       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1001 18:55:24.490779       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 18:55:24.546375       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1001 18:55:24.546563       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1001 18:55:24.858427       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1001 18:55:24.858904       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 18:55:24.919940       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1001 18:55:24.919984       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1001 18:55:27.931251       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 01 19:08:09 addons-800266 kubelet[1206]: E1001 19:08:09.765113    1206 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="22221d1d-2188-4e3c-a522-e2b0dd98aa60" containerName="hostpath"
	Oct 01 19:08:09 addons-800266 kubelet[1206]: E1001 19:08:09.765120    1206 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="22221d1d-2188-4e3c-a522-e2b0dd98aa60" containerName="csi-external-health-monitor-controller"
	Oct 01 19:08:09 addons-800266 kubelet[1206]: E1001 19:08:09.765128    1206 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="22221d1d-2188-4e3c-a522-e2b0dd98aa60" containerName="liveness-probe"
	Oct 01 19:08:09 addons-800266 kubelet[1206]: E1001 19:08:09.765136    1206 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="78339872-e21b-4348-9374-e13f9b6d4884" containerName="volume-snapshot-controller"
	Oct 01 19:08:09 addons-800266 kubelet[1206]: E1001 19:08:09.765144    1206 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4448db04-0896-4ccc-a4ea-eeaa1f1670a1" containerName="volume-snapshot-controller"
	Oct 01 19:08:09 addons-800266 kubelet[1206]: E1001 19:08:09.765150    1206 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="22221d1d-2188-4e3c-a522-e2b0dd98aa60" containerName="csi-provisioner"
	Oct 01 19:08:09 addons-800266 kubelet[1206]: E1001 19:08:09.765159    1206 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56f788c0-c09f-459b-8f37-4bc5cbc483ee" containerName="csi-resizer"
	Oct 01 19:08:09 addons-800266 kubelet[1206]: E1001 19:08:09.765166    1206 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="22221d1d-2188-4e3c-a522-e2b0dd98aa60" containerName="csi-snapshotter"
	Oct 01 19:08:09 addons-800266 kubelet[1206]: E1001 19:08:09.765175    1206 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="575646ba-0714-43ff-84db-64681a170979" containerName="task-pv-container"
	Oct 01 19:08:09 addons-800266 kubelet[1206]: E1001 19:08:09.765183    1206 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d91f6f6-2903-4708-9bc4-03e03fffa147" containerName="yakd"
	Oct 01 19:08:09 addons-800266 kubelet[1206]: E1001 19:08:09.765189    1206 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7a3746e4-0f9e-4707-8c0f-a2102389ae24" containerName="csi-attacher"
	Oct 01 19:08:09 addons-800266 kubelet[1206]: I1001 19:08:09.765224    1206 memory_manager.go:354] "RemoveStaleState removing state" podUID="575646ba-0714-43ff-84db-64681a170979" containerName="task-pv-container"
	Oct 01 19:08:09 addons-800266 kubelet[1206]: I1001 19:08:09.765231    1206 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d91f6f6-2903-4708-9bc4-03e03fffa147" containerName="yakd"
	Oct 01 19:08:09 addons-800266 kubelet[1206]: I1001 19:08:09.765235    1206 memory_manager.go:354] "RemoveStaleState removing state" podUID="4448db04-0896-4ccc-a4ea-eeaa1f1670a1" containerName="volume-snapshot-controller"
	Oct 01 19:08:09 addons-800266 kubelet[1206]: I1001 19:08:09.765240    1206 memory_manager.go:354] "RemoveStaleState removing state" podUID="56f788c0-c09f-459b-8f37-4bc5cbc483ee" containerName="csi-resizer"
	Oct 01 19:08:09 addons-800266 kubelet[1206]: I1001 19:08:09.765245    1206 memory_manager.go:354] "RemoveStaleState removing state" podUID="22221d1d-2188-4e3c-a522-e2b0dd98aa60" containerName="csi-snapshotter"
	Oct 01 19:08:09 addons-800266 kubelet[1206]: I1001 19:08:09.765249    1206 memory_manager.go:354] "RemoveStaleState removing state" podUID="22221d1d-2188-4e3c-a522-e2b0dd98aa60" containerName="node-driver-registrar"
	Oct 01 19:08:09 addons-800266 kubelet[1206]: I1001 19:08:09.765254    1206 memory_manager.go:354] "RemoveStaleState removing state" podUID="22221d1d-2188-4e3c-a522-e2b0dd98aa60" containerName="hostpath"
	Oct 01 19:08:09 addons-800266 kubelet[1206]: I1001 19:08:09.765260    1206 memory_manager.go:354] "RemoveStaleState removing state" podUID="78339872-e21b-4348-9374-e13f9b6d4884" containerName="volume-snapshot-controller"
	Oct 01 19:08:09 addons-800266 kubelet[1206]: I1001 19:08:09.765265    1206 memory_manager.go:354] "RemoveStaleState removing state" podUID="22221d1d-2188-4e3c-a522-e2b0dd98aa60" containerName="csi-external-health-monitor-controller"
	Oct 01 19:08:09 addons-800266 kubelet[1206]: I1001 19:08:09.765269    1206 memory_manager.go:354] "RemoveStaleState removing state" podUID="22221d1d-2188-4e3c-a522-e2b0dd98aa60" containerName="liveness-probe"
	Oct 01 19:08:09 addons-800266 kubelet[1206]: I1001 19:08:09.765293    1206 memory_manager.go:354] "RemoveStaleState removing state" podUID="95262676-0b99-4f74-b1fd-cf170444b0f1" containerName="local-path-provisioner"
	Oct 01 19:08:09 addons-800266 kubelet[1206]: I1001 19:08:09.765303    1206 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a3746e4-0f9e-4707-8c0f-a2102389ae24" containerName="csi-attacher"
	Oct 01 19:08:09 addons-800266 kubelet[1206]: I1001 19:08:09.765309    1206 memory_manager.go:354] "RemoveStaleState removing state" podUID="22221d1d-2188-4e3c-a522-e2b0dd98aa60" containerName="csi-provisioner"
	Oct 01 19:08:09 addons-800266 kubelet[1206]: I1001 19:08:09.879992    1206 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x546b\" (UniqueName: \"kubernetes.io/projected/4346a434-1efe-4fcc-aadf-751f61d32b31-kube-api-access-x546b\") pod \"hello-world-app-55bf9c44b4-46nkk\" (UID: \"4346a434-1efe-4fcc-aadf-751f61d32b31\") " pod="default/hello-world-app-55bf9c44b4-46nkk"
	
	
	==> storage-provisioner [26504377c61b094c87f9c57dc6547187209d3397c940e127450701ed086d4170] <==
	I1001 18:55:38.925001       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1001 18:55:39.164951       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1001 18:55:39.189198       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1001 18:55:39.234001       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1001 18:55:39.234143       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-800266_c0ffc0e5-d926-445b-9d38-54d07d6e5c0b!
	I1001 18:55:39.246031       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5ebfcb6f-6d49-4e2c-894f-b9d92e850914", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-800266_c0ffc0e5-d926-445b-9d38-54d07d6e5c0b became leader
	I1001 18:55:39.435277       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-800266_c0ffc0e5-d926-445b-9d38-54d07d6e5c0b!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-800266 -n addons-800266
helpers_test.go:261: (dbg) Run:  kubectl --context addons-800266 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-46nkk ingress-nginx-admission-create-cms54 ingress-nginx-admission-patch-hdgw4
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-800266 describe pod hello-world-app-55bf9c44b4-46nkk ingress-nginx-admission-create-cms54 ingress-nginx-admission-patch-hdgw4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-800266 describe pod hello-world-app-55bf9c44b4-46nkk ingress-nginx-admission-create-cms54 ingress-nginx-admission-patch-hdgw4: exit status 1 (70.335631ms)

-- stdout --
	Name:             hello-world-app-55bf9c44b4-46nkk
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-800266/192.168.39.56
	Start Time:       Tue, 01 Oct 2024 19:08:09 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x546b (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-x546b:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-46nkk to addons-800266
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-cms54" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-hdgw4" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-800266 describe pod hello-world-app-55bf9c44b4-46nkk ingress-nginx-admission-create-cms54 ingress-nginx-admission-patch-hdgw4: exit status 1
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-800266 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-amd64 -p addons-800266 addons disable ingress-dns --alsologtostderr -v=1: (1.636115345s)
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-800266 addons disable ingress --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-amd64 -p addons-800266 addons disable ingress --alsologtostderr -v=1: (7.695059764s)
--- FAIL: TestAddons/parallel/Ingress (156.09s)

TestAddons/parallel/MetricsServer (315.42s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 5.322559ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-7mp6j" [f319c15f-c9b0-400d-89b5-d388e9a49218] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00362309s
addons_test.go:402: (dbg) Run:  kubectl --context addons-800266 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-800266 top pods -n kube-system: exit status 1 (88.616141ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-h656l, age: 9m45.090540955s

** /stderr **
I1001 19:05:16.092630   18430 retry.go:31] will retry after 2.922866726s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-800266 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-800266 top pods -n kube-system: exit status 1 (67.42084ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-h656l, age: 9m48.081952643s

** /stderr **
I1001 19:05:19.084036   18430 retry.go:31] will retry after 5.229200292s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-800266 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-800266 top pods -n kube-system: exit status 1 (70.127797ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-h656l, age: 9m53.38295709s

** /stderr **
I1001 19:05:24.384632   18430 retry.go:31] will retry after 4.842447255s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-800266 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-800266 top pods -n kube-system: exit status 1 (74.799905ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-h656l, age: 9m58.300924993s

** /stderr **
I1001 19:05:29.302796   18430 retry.go:31] will retry after 7.515798073s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-800266 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-800266 top pods -n kube-system: exit status 1 (69.588267ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-h656l, age: 10m5.885976414s

** /stderr **
I1001 19:05:36.888511   18430 retry.go:31] will retry after 21.899963761s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-800266 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-800266 top pods -n kube-system: exit status 1 (73.604285ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-h656l, age: 10m27.860224716s

** /stderr **
I1001 19:05:58.862659   18430 retry.go:31] will retry after 16.258206413s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-800266 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-800266 top pods -n kube-system: exit status 1 (64.404964ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-h656l, age: 10m44.183570773s

** /stderr **
I1001 19:06:15.185541   18430 retry.go:31] will retry after 35.92425835s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-800266 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-800266 top pods -n kube-system: exit status 1 (67.774899ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-h656l, age: 11m20.176114732s

** /stderr **
I1001 19:06:51.178120   18430 retry.go:31] will retry after 28.938174043s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-800266 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-800266 top pods -n kube-system: exit status 1 (63.449382ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-h656l, age: 11m49.178772309s

** /stderr **
I1001 19:07:20.180707   18430 retry.go:31] will retry after 1m7.709991601s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-800266 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-800266 top pods -n kube-system: exit status 1 (63.159841ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-h656l, age: 12m56.957566194s

** /stderr **
I1001 19:08:27.959444   18430 retry.go:31] will retry after 33.165500562s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-800266 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-800266 top pods -n kube-system: exit status 1 (66.276918ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-h656l, age: 13m30.190386429s

** /stderr **
I1001 19:09:01.192508   18430 retry.go:31] will retry after 1m21.47788655s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-800266 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-800266 top pods -n kube-system: exit status 1 (65.995069ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-h656l, age: 14m51.737828508s

** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-800266 -n addons-800266
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-800266 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-800266 logs -n 25: (1.28718287s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-333407                                                                     | download-only-333407 | jenkins | v1.34.0 | 01 Oct 24 18:54 UTC | 01 Oct 24 18:54 UTC |
	| delete  | -p download-only-195954                                                                     | download-only-195954 | jenkins | v1.34.0 | 01 Oct 24 18:54 UTC | 01 Oct 24 18:54 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-213993 | jenkins | v1.34.0 | 01 Oct 24 18:54 UTC |                     |
	|         | binary-mirror-213993                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46019                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-213993                                                                     | binary-mirror-213993 | jenkins | v1.34.0 | 01 Oct 24 18:54 UTC | 01 Oct 24 18:54 UTC |
	| addons  | enable dashboard -p                                                                         | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 18:54 UTC |                     |
	|         | addons-800266                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 18:54 UTC |                     |
	|         | addons-800266                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-800266 --wait=true                                                                | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 18:54 UTC | 01 Oct 24 18:56 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-800266 addons disable                                                                | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 18:56 UTC | 01 Oct 24 18:56 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-800266 addons disable                                                                | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:05 UTC | 01 Oct 24 19:05 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:05 UTC | 01 Oct 24 19:05 UTC |
	|         | -p addons-800266                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:05 UTC | 01 Oct 24 19:05 UTC |
	|         | -p addons-800266                                                                            |                      |         |         |                     |                     |
	| addons  | addons-800266 addons disable                                                                | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:05 UTC | 01 Oct 24 19:05 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-800266 ip                                                                            | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:05 UTC | 01 Oct 24 19:05 UTC |
	| addons  | addons-800266 addons disable                                                                | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:05 UTC | 01 Oct 24 19:05 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-800266 ssh cat                                                                       | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:05 UTC | 01 Oct 24 19:05 UTC |
	|         | /opt/local-path-provisioner/pvc-8cdb206c-3008-4806-8f7b-043e61fbf684_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-800266 addons disable                                                                | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:05 UTC | 01 Oct 24 19:06 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-800266 addons                                                                        | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:05 UTC | 01 Oct 24 19:05 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-800266 addons                                                                        | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:05 UTC | 01 Oct 24 19:05 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-800266 ssh curl -s                                                                   | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:05 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-800266 addons                                                                        | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:06 UTC | 01 Oct 24 19:06 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-800266 addons                                                                        | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:06 UTC | 01 Oct 24 19:06 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-800266 addons disable                                                                | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:06 UTC | 01 Oct 24 19:06 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ip      | addons-800266 ip                                                                            | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:08 UTC | 01 Oct 24 19:08 UTC |
	| addons  | addons-800266 addons disable                                                                | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:08 UTC | 01 Oct 24 19:08 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-800266 addons disable                                                                | addons-800266        | jenkins | v1.34.0 | 01 Oct 24 19:08 UTC | 01 Oct 24 19:08 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 18:54:45
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 18:54:45.498233   19130 out.go:345] Setting OutFile to fd 1 ...
	I1001 18:54:45.498361   19130 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 18:54:45.498373   19130 out.go:358] Setting ErrFile to fd 2...
	I1001 18:54:45.498380   19130 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 18:54:45.498595   19130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 18:54:45.499195   19130 out.go:352] Setting JSON to false
	I1001 18:54:45.499987   19130 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2227,"bootTime":1727806658,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 18:54:45.500077   19130 start.go:139] virtualization: kvm guest
	I1001 18:54:45.501925   19130 out.go:177] * [addons-800266] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 18:54:45.503081   19130 notify.go:220] Checking for updates...
	I1001 18:54:45.503103   19130 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 18:54:45.504220   19130 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 18:54:45.505318   19130 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 18:54:45.506383   19130 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 18:54:45.507427   19130 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 18:54:45.508563   19130 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 18:54:45.509781   19130 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 18:54:45.542204   19130 out.go:177] * Using the kvm2 driver based on user configuration
	I1001 18:54:45.543033   19130 start.go:297] selected driver: kvm2
	I1001 18:54:45.543048   19130 start.go:901] validating driver "kvm2" against <nil>
	I1001 18:54:45.543059   19130 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 18:54:45.543726   19130 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 18:54:45.543817   19130 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-11198/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 18:54:45.559273   19130 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 18:54:45.559325   19130 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 18:54:45.559575   19130 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 18:54:45.559604   19130 cni.go:84] Creating CNI manager for ""
	I1001 18:54:45.559640   19130 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 18:54:45.559650   19130 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 18:54:45.559699   19130 start.go:340] cluster config:
	{Name:addons-800266 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-800266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 18:54:45.559789   19130 iso.go:125] acquiring lock: {Name:mk06aa0d5182c9fbfa5e40313025cbec2b4400a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 18:54:45.561303   19130 out.go:177] * Starting "addons-800266" primary control-plane node in "addons-800266" cluster
	I1001 18:54:45.562260   19130 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 18:54:45.562302   19130 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 18:54:45.562315   19130 cache.go:56] Caching tarball of preloaded images
	I1001 18:54:45.562412   19130 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 18:54:45.562426   19130 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 18:54:45.562844   19130 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/config.json ...
	I1001 18:54:45.562870   19130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/config.json: {Name:mk42ad5268c0ee1c54e04bf3050a8a4716c0fd89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:54:45.563052   19130 start.go:360] acquireMachinesLock for addons-800266: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 18:54:45.563152   19130 start.go:364] duration metric: took 80.6µs to acquireMachinesLock for "addons-800266"
	I1001 18:54:45.563177   19130 start.go:93] Provisioning new machine with config: &{Name:addons-800266 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-800266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 18:54:45.563265   19130 start.go:125] createHost starting for "" (driver="kvm2")
	I1001 18:54:45.564909   19130 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1001 18:54:45.565053   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:54:45.565097   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:54:45.580192   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35307
	I1001 18:54:45.580732   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:54:45.581364   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:54:45.581392   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:54:45.581758   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:54:45.581964   19130 main.go:141] libmachine: (addons-800266) Calling .GetMachineName
	I1001 18:54:45.582155   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:54:45.582322   19130 start.go:159] libmachine.API.Create for "addons-800266" (driver="kvm2")
	I1001 18:54:45.582356   19130 client.go:168] LocalClient.Create starting
	I1001 18:54:45.582394   19130 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem
	I1001 18:54:45.662095   19130 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem
	I1001 18:54:45.765180   19130 main.go:141] libmachine: Running pre-create checks...
	I1001 18:54:45.765204   19130 main.go:141] libmachine: (addons-800266) Calling .PreCreateCheck
	I1001 18:54:45.765686   19130 main.go:141] libmachine: (addons-800266) Calling .GetConfigRaw
	I1001 18:54:45.766122   19130 main.go:141] libmachine: Creating machine...
	I1001 18:54:45.766137   19130 main.go:141] libmachine: (addons-800266) Calling .Create
	I1001 18:54:45.766335   19130 main.go:141] libmachine: (addons-800266) Creating KVM machine...
	I1001 18:54:45.767606   19130 main.go:141] libmachine: (addons-800266) DBG | found existing default KVM network
	I1001 18:54:45.768408   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:45.768211   19152 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111f0}
	I1001 18:54:45.768501   19130 main.go:141] libmachine: (addons-800266) DBG | created network xml: 
	I1001 18:54:45.768525   19130 main.go:141] libmachine: (addons-800266) DBG | <network>
	I1001 18:54:45.768533   19130 main.go:141] libmachine: (addons-800266) DBG |   <name>mk-addons-800266</name>
	I1001 18:54:45.768540   19130 main.go:141] libmachine: (addons-800266) DBG |   <dns enable='no'/>
	I1001 18:54:45.768547   19130 main.go:141] libmachine: (addons-800266) DBG |   
	I1001 18:54:45.768556   19130 main.go:141] libmachine: (addons-800266) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1001 18:54:45.768565   19130 main.go:141] libmachine: (addons-800266) DBG |     <dhcp>
	I1001 18:54:45.768574   19130 main.go:141] libmachine: (addons-800266) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1001 18:54:45.768586   19130 main.go:141] libmachine: (addons-800266) DBG |     </dhcp>
	I1001 18:54:45.768594   19130 main.go:141] libmachine: (addons-800266) DBG |   </ip>
	I1001 18:54:45.768601   19130 main.go:141] libmachine: (addons-800266) DBG |   
	I1001 18:54:45.768610   19130 main.go:141] libmachine: (addons-800266) DBG | </network>
	I1001 18:54:45.768640   19130 main.go:141] libmachine: (addons-800266) DBG | 
	I1001 18:54:45.773904   19130 main.go:141] libmachine: (addons-800266) DBG | trying to create private KVM network mk-addons-800266 192.168.39.0/24...
	I1001 18:54:45.841936   19130 main.go:141] libmachine: (addons-800266) DBG | private KVM network mk-addons-800266 192.168.39.0/24 created
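
For reference, the network XML logged above is handed to libvirt as-is. The following is a minimal sketch of the same define-and-start step using the libvirt Go bindings (libvirt.org/go/libvirt); the connection URI and XML values are taken from the log, while the program structure and error handling are illustrative assumptions rather than minikube's actual code.

	// netdefine.go: sketch only - define and start a libvirt network from an
	// XML document like the one in the log above. Assumes libvirt.org/go/libvirt
	// and a reachable qemu:///system socket.
	package main

	import (
		"log"

		libvirt "libvirt.org/go/libvirt"
	)

	const networkXML = `<network>
	  <name>mk-addons-800266</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>`

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatalf("connect: %v", err)
		}
		defer conn.Close()

		// Define the persistent network, then bring it up (the
		// "trying to create private KVM network" step in the log).
		network, err := conn.NetworkDefineXML(networkXML)
		if err != nil {
			log.Fatalf("define network: %v", err)
		}
		defer network.Free()
		if err := network.Create(); err != nil {
			log.Fatalf("start network: %v", err)
		}
		log.Println("network mk-addons-800266 is active")
	}
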
	I1001 18:54:45.841967   19130 main.go:141] libmachine: (addons-800266) Setting up store path in /home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266 ...
	I1001 18:54:45.841985   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:45.841901   19152 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 18:54:45.842003   19130 main.go:141] libmachine: (addons-800266) Building disk image from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 18:54:45.842040   19130 main.go:141] libmachine: (addons-800266) Downloading /home/jenkins/minikube-integration/19736-11198/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 18:54:46.116666   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:46.116527   19152 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa...
	I1001 18:54:46.227591   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:46.227418   19152 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/addons-800266.rawdisk...
	I1001 18:54:46.227635   19130 main.go:141] libmachine: (addons-800266) DBG | Writing magic tar header
	I1001 18:54:46.227646   19130 main.go:141] libmachine: (addons-800266) DBG | Writing SSH key tar header
	I1001 18:54:46.227662   19130 main.go:141] libmachine: (addons-800266) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266 (perms=drwx------)
	I1001 18:54:46.227678   19130 main.go:141] libmachine: (addons-800266) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines (perms=drwxr-xr-x)
	I1001 18:54:46.227685   19130 main.go:141] libmachine: (addons-800266) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube (perms=drwxr-xr-x)
	I1001 18:54:46.227695   19130 main.go:141] libmachine: (addons-800266) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198 (perms=drwxrwxr-x)
	I1001 18:54:46.227705   19130 main.go:141] libmachine: (addons-800266) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 18:54:46.227720   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:46.227537   19152 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266 ...
	I1001 18:54:46.227730   19130 main.go:141] libmachine: (addons-800266) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 18:54:46.227755   19130 main.go:141] libmachine: (addons-800266) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266
	I1001 18:54:46.227768   19130 main.go:141] libmachine: (addons-800266) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines
	I1001 18:54:46.227774   19130 main.go:141] libmachine: (addons-800266) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 18:54:46.227785   19130 main.go:141] libmachine: (addons-800266) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198
	I1001 18:54:46.227805   19130 main.go:141] libmachine: (addons-800266) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 18:54:46.227817   19130 main.go:141] libmachine: (addons-800266) Creating domain...
	I1001 18:54:46.227829   19130 main.go:141] libmachine: (addons-800266) DBG | Checking permissions on dir: /home/jenkins
	I1001 18:54:46.227839   19130 main.go:141] libmachine: (addons-800266) DBG | Checking permissions on dir: /home
	I1001 18:54:46.227850   19130 main.go:141] libmachine: (addons-800266) DBG | Skipping /home - not owner
	I1001 18:54:46.228905   19130 main.go:141] libmachine: (addons-800266) define libvirt domain using xml: 
	I1001 18:54:46.228936   19130 main.go:141] libmachine: (addons-800266) <domain type='kvm'>
	I1001 18:54:46.228946   19130 main.go:141] libmachine: (addons-800266)   <name>addons-800266</name>
	I1001 18:54:46.228952   19130 main.go:141] libmachine: (addons-800266)   <memory unit='MiB'>4000</memory>
	I1001 18:54:46.228961   19130 main.go:141] libmachine: (addons-800266)   <vcpu>2</vcpu>
	I1001 18:54:46.228973   19130 main.go:141] libmachine: (addons-800266)   <features>
	I1001 18:54:46.228982   19130 main.go:141] libmachine: (addons-800266)     <acpi/>
	I1001 18:54:46.228989   19130 main.go:141] libmachine: (addons-800266)     <apic/>
	I1001 18:54:46.228998   19130 main.go:141] libmachine: (addons-800266)     <pae/>
	I1001 18:54:46.229004   19130 main.go:141] libmachine: (addons-800266)     
	I1001 18:54:46.229012   19130 main.go:141] libmachine: (addons-800266)   </features>
	I1001 18:54:46.229020   19130 main.go:141] libmachine: (addons-800266)   <cpu mode='host-passthrough'>
	I1001 18:54:46.229028   19130 main.go:141] libmachine: (addons-800266)   
	I1001 18:54:46.229043   19130 main.go:141] libmachine: (addons-800266)   </cpu>
	I1001 18:54:46.229054   19130 main.go:141] libmachine: (addons-800266)   <os>
	I1001 18:54:46.229066   19130 main.go:141] libmachine: (addons-800266)     <type>hvm</type>
	I1001 18:54:46.229077   19130 main.go:141] libmachine: (addons-800266)     <boot dev='cdrom'/>
	I1001 18:54:46.229085   19130 main.go:141] libmachine: (addons-800266)     <boot dev='hd'/>
	I1001 18:54:46.229094   19130 main.go:141] libmachine: (addons-800266)     <bootmenu enable='no'/>
	I1001 18:54:46.229101   19130 main.go:141] libmachine: (addons-800266)   </os>
	I1001 18:54:46.229109   19130 main.go:141] libmachine: (addons-800266)   <devices>
	I1001 18:54:46.229117   19130 main.go:141] libmachine: (addons-800266)     <disk type='file' device='cdrom'>
	I1001 18:54:46.229141   19130 main.go:141] libmachine: (addons-800266)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/boot2docker.iso'/>
	I1001 18:54:46.229156   19130 main.go:141] libmachine: (addons-800266)       <target dev='hdc' bus='scsi'/>
	I1001 18:54:46.229167   19130 main.go:141] libmachine: (addons-800266)       <readonly/>
	I1001 18:54:46.229176   19130 main.go:141] libmachine: (addons-800266)     </disk>
	I1001 18:54:46.229190   19130 main.go:141] libmachine: (addons-800266)     <disk type='file' device='disk'>
	I1001 18:54:46.229203   19130 main.go:141] libmachine: (addons-800266)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 18:54:46.229219   19130 main.go:141] libmachine: (addons-800266)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/addons-800266.rawdisk'/>
	I1001 18:54:46.229232   19130 main.go:141] libmachine: (addons-800266)       <target dev='hda' bus='virtio'/>
	I1001 18:54:46.229244   19130 main.go:141] libmachine: (addons-800266)     </disk>
	I1001 18:54:46.229252   19130 main.go:141] libmachine: (addons-800266)     <interface type='network'>
	I1001 18:54:46.229266   19130 main.go:141] libmachine: (addons-800266)       <source network='mk-addons-800266'/>
	I1001 18:54:46.229283   19130 main.go:141] libmachine: (addons-800266)       <model type='virtio'/>
	I1001 18:54:46.229307   19130 main.go:141] libmachine: (addons-800266)     </interface>
	I1001 18:54:46.229325   19130 main.go:141] libmachine: (addons-800266)     <interface type='network'>
	I1001 18:54:46.229331   19130 main.go:141] libmachine: (addons-800266)       <source network='default'/>
	I1001 18:54:46.229336   19130 main.go:141] libmachine: (addons-800266)       <model type='virtio'/>
	I1001 18:54:46.229344   19130 main.go:141] libmachine: (addons-800266)     </interface>
	I1001 18:54:46.229357   19130 main.go:141] libmachine: (addons-800266)     <serial type='pty'>
	I1001 18:54:46.229364   19130 main.go:141] libmachine: (addons-800266)       <target port='0'/>
	I1001 18:54:46.229368   19130 main.go:141] libmachine: (addons-800266)     </serial>
	I1001 18:54:46.229376   19130 main.go:141] libmachine: (addons-800266)     <console type='pty'>
	I1001 18:54:46.229385   19130 main.go:141] libmachine: (addons-800266)       <target type='serial' port='0'/>
	I1001 18:54:46.229392   19130 main.go:141] libmachine: (addons-800266)     </console>
	I1001 18:54:46.229396   19130 main.go:141] libmachine: (addons-800266)     <rng model='virtio'>
	I1001 18:54:46.229404   19130 main.go:141] libmachine: (addons-800266)       <backend model='random'>/dev/random</backend>
	I1001 18:54:46.229414   19130 main.go:141] libmachine: (addons-800266)     </rng>
	I1001 18:54:46.229450   19130 main.go:141] libmachine: (addons-800266)     
	I1001 18:54:46.229474   19130 main.go:141] libmachine: (addons-800266)     
	I1001 18:54:46.229484   19130 main.go:141] libmachine: (addons-800266)   </devices>
	I1001 18:54:46.229494   19130 main.go:141] libmachine: (addons-800266) </domain>
	I1001 18:54:46.229507   19130 main.go:141] libmachine: (addons-800266) 
	I1001 18:54:46.236399   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:7a:a5:eb in network default
	I1001 18:54:46.236906   19130 main.go:141] libmachine: (addons-800266) Ensuring networks are active...
	I1001 18:54:46.236926   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:54:46.237570   19130 main.go:141] libmachine: (addons-800266) Ensuring network default is active
	I1001 18:54:46.237872   19130 main.go:141] libmachine: (addons-800266) Ensuring network mk-addons-800266 is active
	I1001 18:54:46.239179   19130 main.go:141] libmachine: (addons-800266) Getting domain xml...
	I1001 18:54:46.239936   19130 main.go:141] libmachine: (addons-800266) Creating domain...
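
The domain XML above is registered and booted through libvirt in the same way. Below is a minimal sketch of that define-and-start step with the same Go bindings; reading the XML from a local file and the surrounding error handling are assumptions for illustration, not minikube's actual code path.

	// domaindefine.go: sketch only - define and boot a KVM domain from an XML
	// description such as the one logged above. Assumes libvirt.org/go/libvirt
	// and that the disk image and ISO referenced by the XML already exist.
	package main

	import (
		"log"
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		// For this sketch the domain XML is assumed to be saved to a local file.
		xml, err := os.ReadFile("addons-800266.xml")
		if err != nil {
			log.Fatalf("read domain xml: %v", err)
		}

		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatalf("connect: %v", err)
		}
		defer conn.Close()

		// DomainDefineXML registers the domain persistently; Create() boots it,
		// which corresponds to the "Creating domain..." step in the log.
		dom, err := conn.DomainDefineXML(string(xml))
		if err != nil {
			log.Fatalf("define domain: %v", err)
		}
		defer dom.Free()
		if err := dom.Create(); err != nil {
			log.Fatalf("start domain: %v", err)
		}
		log.Println("domain addons-800266 started")
	}
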
	I1001 18:54:47.656550   19130 main.go:141] libmachine: (addons-800266) Waiting to get IP...
	I1001 18:54:47.657509   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:54:47.657941   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find current IP address of domain addons-800266 in network mk-addons-800266
	I1001 18:54:47.657994   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:47.657933   19152 retry.go:31] will retry after 287.757922ms: waiting for machine to come up
	I1001 18:54:47.947332   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:54:47.947608   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find current IP address of domain addons-800266 in network mk-addons-800266
	I1001 18:54:47.947635   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:47.947559   19152 retry.go:31] will retry after 345.990873ms: waiting for machine to come up
	I1001 18:54:48.295045   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:54:48.295437   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find current IP address of domain addons-800266 in network mk-addons-800266
	I1001 18:54:48.295459   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:48.295404   19152 retry.go:31] will retry after 397.709371ms: waiting for machine to come up
	I1001 18:54:48.696115   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:54:48.696512   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find current IP address of domain addons-800266 in network mk-addons-800266
	I1001 18:54:48.696534   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:48.696469   19152 retry.go:31] will retry after 508.256405ms: waiting for machine to come up
	I1001 18:54:49.206276   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:54:49.206780   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find current IP address of domain addons-800266 in network mk-addons-800266
	I1001 18:54:49.206809   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:49.206723   19152 retry.go:31] will retry after 734.08879ms: waiting for machine to come up
	I1001 18:54:49.942495   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:54:49.942835   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find current IP address of domain addons-800266 in network mk-addons-800266
	I1001 18:54:49.942866   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:49.942803   19152 retry.go:31] will retry after 875.435099ms: waiting for machine to come up
	I1001 18:54:50.819451   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:54:50.819814   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find current IP address of domain addons-800266 in network mk-addons-800266
	I1001 18:54:50.819847   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:50.819785   19152 retry.go:31] will retry after 955.050707ms: waiting for machine to come up
	I1001 18:54:51.777002   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:54:51.777479   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find current IP address of domain addons-800266 in network mk-addons-800266
	I1001 18:54:51.777505   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:51.777419   19152 retry.go:31] will retry after 1.444896252s: waiting for machine to come up
	I1001 18:54:53.223789   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:54:53.224170   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find current IP address of domain addons-800266 in network mk-addons-800266
	I1001 18:54:53.224204   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:53.224102   19152 retry.go:31] will retry after 1.214527673s: waiting for machine to come up
	I1001 18:54:54.440479   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:54:54.440898   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find current IP address of domain addons-800266 in network mk-addons-800266
	I1001 18:54:54.440924   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:54.440860   19152 retry.go:31] will retry after 1.791674016s: waiting for machine to come up
	I1001 18:54:56.234623   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:54:56.235230   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find current IP address of domain addons-800266 in network mk-addons-800266
	I1001 18:54:56.235254   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:56.235167   19152 retry.go:31] will retry after 1.939828883s: waiting for machine to come up
	I1001 18:54:58.177363   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:54:58.177904   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find current IP address of domain addons-800266 in network mk-addons-800266
	I1001 18:54:58.177932   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:54:58.177862   19152 retry.go:31] will retry after 3.297408742s: waiting for machine to come up
	I1001 18:55:01.477029   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:01.477440   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find current IP address of domain addons-800266 in network mk-addons-800266
	I1001 18:55:01.477461   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:55:01.477393   19152 retry.go:31] will retry after 2.96185412s: waiting for machine to come up
	I1001 18:55:04.442661   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:04.443064   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find current IP address of domain addons-800266 in network mk-addons-800266
	I1001 18:55:04.443085   19130 main.go:141] libmachine: (addons-800266) DBG | I1001 18:55:04.443024   19152 retry.go:31] will retry after 4.519636945s: waiting for machine to come up
	I1001 18:55:08.966536   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:08.967003   19130 main.go:141] libmachine: (addons-800266) Found IP for machine: 192.168.39.56
	I1001 18:55:08.967018   19130 main.go:141] libmachine: (addons-800266) Reserving static IP address...
	I1001 18:55:08.967054   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has current primary IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:08.967453   19130 main.go:141] libmachine: (addons-800266) DBG | unable to find host DHCP lease matching {name: "addons-800266", mac: "52:54:00:2e:3f:6d", ip: "192.168.39.56"} in network mk-addons-800266
	I1001 18:55:09.038868   19130 main.go:141] libmachine: (addons-800266) DBG | Getting to WaitForSSH function...
	I1001 18:55:09.038893   19130 main.go:141] libmachine: (addons-800266) Reserved static IP address: 192.168.39.56
	I1001 18:55:09.038906   19130 main.go:141] libmachine: (addons-800266) Waiting for SSH to be available...
	I1001 18:55:09.041494   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.041879   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:09.041907   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.042082   19130 main.go:141] libmachine: (addons-800266) DBG | Using SSH client type: external
	I1001 18:55:09.042111   19130 main.go:141] libmachine: (addons-800266) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa (-rw-------)
	I1001 18:55:09.042140   19130 main.go:141] libmachine: (addons-800266) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.56 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 18:55:09.042159   19130 main.go:141] libmachine: (addons-800266) DBG | About to run SSH command:
	I1001 18:55:09.042172   19130 main.go:141] libmachine: (addons-800266) DBG | exit 0
	I1001 18:55:09.172742   19130 main.go:141] libmachine: (addons-800266) DBG | SSH cmd err, output: <nil>: 
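
The WaitForSSH step above simply retries an `exit 0` command over SSH until the new VM answers. A minimal sketch of the same reachability probe with golang.org/x/crypto/ssh follows; the host, user, and key path are copied from the log, while the retry count and interval are assumptions.

	// sshwait.go: sketch only - poll until an SSH "exit 0" succeeds against the
	// new VM, mirroring the WaitForSSH step in the log. Host/user/key values are
	// the ones shown above; timing values are illustrative.
	package main

	import (
		"log"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa")
		if err != nil {
			log.Fatalf("read key: %v", err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatalf("parse key: %v", err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM; matches StrictHostKeyChecking=no above
			Timeout:         10 * time.Second,
		}
		for i := 0; i < 30; i++ {
			if client, err := ssh.Dial("tcp", "192.168.39.56:22", cfg); err == nil {
				sess, err := client.NewSession()
				if err == nil {
					runErr := sess.Run("exit 0") // same no-op probe as in the log
					sess.Close()
					client.Close()
					if runErr == nil {
						log.Println("SSH is available")
						return
					}
				} else {
					client.Close()
				}
			}
			time.Sleep(5 * time.Second)
		}
		log.Fatal("SSH never became available")
	}
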
	I1001 18:55:09.173014   19130 main.go:141] libmachine: (addons-800266) KVM machine creation complete!
	I1001 18:55:09.173314   19130 main.go:141] libmachine: (addons-800266) Calling .GetConfigRaw
	I1001 18:55:09.173939   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:09.174135   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:09.174296   19130 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 18:55:09.174312   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:09.175520   19130 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 18:55:09.175543   19130 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 18:55:09.175551   19130 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 18:55:09.175560   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:09.177830   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.178171   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:09.178203   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.178309   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:09.178480   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:09.178647   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:09.178815   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:09.178945   19130 main.go:141] libmachine: Using SSH client type: native
	I1001 18:55:09.179201   19130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1001 18:55:09.179214   19130 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 18:55:09.287665   19130 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 18:55:09.287692   19130 main.go:141] libmachine: Detecting the provisioner...
	I1001 18:55:09.287706   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:09.290528   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.290883   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:09.290900   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.291013   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:09.291188   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:09.291309   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:09.291429   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:09.291541   19130 main.go:141] libmachine: Using SSH client type: native
	I1001 18:55:09.291745   19130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1001 18:55:09.291760   19130 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 18:55:09.396609   19130 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 18:55:09.396671   19130 main.go:141] libmachine: found compatible host: buildroot
	I1001 18:55:09.396678   19130 main.go:141] libmachine: Provisioning with buildroot...
	I1001 18:55:09.396684   19130 main.go:141] libmachine: (addons-800266) Calling .GetMachineName
	I1001 18:55:09.396947   19130 buildroot.go:166] provisioning hostname "addons-800266"
	I1001 18:55:09.396976   19130 main.go:141] libmachine: (addons-800266) Calling .GetMachineName
	I1001 18:55:09.397164   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:09.399516   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.399799   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:09.399827   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.399955   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:09.400153   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:09.400292   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:09.400569   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:09.400771   19130 main.go:141] libmachine: Using SSH client type: native
	I1001 18:55:09.400924   19130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1001 18:55:09.400935   19130 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-800266 && echo "addons-800266" | sudo tee /etc/hostname
	I1001 18:55:09.522797   19130 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-800266
	
	I1001 18:55:09.522825   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:09.525396   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.525782   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:09.525811   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.525942   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:09.526125   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:09.526368   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:09.526579   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:09.526757   19130 main.go:141] libmachine: Using SSH client type: native
	I1001 18:55:09.526928   19130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1001 18:55:09.526953   19130 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-800266' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-800266/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-800266' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 18:55:09.641587   19130 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 18:55:09.641619   19130 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 18:55:09.641687   19130 buildroot.go:174] setting up certificates
	I1001 18:55:09.641707   19130 provision.go:84] configureAuth start
	I1001 18:55:09.641722   19130 main.go:141] libmachine: (addons-800266) Calling .GetMachineName
	I1001 18:55:09.642058   19130 main.go:141] libmachine: (addons-800266) Calling .GetIP
	I1001 18:55:09.644641   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.644929   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:09.644958   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.645092   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:09.647308   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.647727   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:09.647747   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.647921   19130 provision.go:143] copyHostCerts
	I1001 18:55:09.647991   19130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 18:55:09.648126   19130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 18:55:09.648698   19130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 18:55:09.648769   19130 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.addons-800266 san=[127.0.0.1 192.168.39.56 addons-800266 localhost minikube]
	I1001 18:55:09.720055   19130 provision.go:177] copyRemoteCerts
	I1001 18:55:09.720117   19130 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 18:55:09.720139   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:09.722593   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.722878   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:09.722909   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.723021   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:09.723220   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:09.723352   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:09.723486   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:09.806311   19130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 18:55:09.829955   19130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 18:55:09.852875   19130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 18:55:09.876663   19130 provision.go:87] duration metric: took 234.933049ms to configureAuth
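
The configureAuth step above boils down to minting a server certificate for the VM, signed by the local minikube CA, with the node IP and host names from the san=[...] list as SANs. A hedged sketch of that step using Go's crypto/x509 (illustrative only; it assumes a PKCS#1 RSA CA key on disk, and most error handling is elided for brevity):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Load the CA produced earlier (errors elided for brevity).
        caPEM, _ := os.ReadFile("ca.pem")
        caKeyPEM, _ := os.ReadFile("ca-key.pem")
        caBlock, _ := pem.Decode(caPEM)
        caKeyBlock, _ := pem.Decode(caKeyPEM)
        caCert, _ := x509.ParseCertificate(caBlock.Bytes)
        caKey, _ := x509.ParsePKCS1PrivateKey(caKeyBlock.Bytes) // assumes an RSA (PKCS#1) CA key

        serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-800266"}},
            // SANs mirror the san=[...] list in the log line above.
            DNSNames:    []string{"addons-800266", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.56")},
            NotBefore:   time.Now(),
            NotAfter:    time.Now().AddDate(3, 0, 0),
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        out, _ := os.Create("server.pem")
        defer out.Close()
        pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
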
	I1001 18:55:09.876697   19130 buildroot.go:189] setting minikube options for container-runtime
	I1001 18:55:09.876880   19130 config.go:182] Loaded profile config "addons-800266": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 18:55:09.876963   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:09.879582   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.879902   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:09.879924   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:09.880154   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:09.880324   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:09.880504   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:09.880636   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:09.880798   19130 main.go:141] libmachine: Using SSH client type: native
	I1001 18:55:09.880952   19130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1001 18:55:09.880965   19130 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 18:55:10.103689   19130 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 18:55:10.103724   19130 main.go:141] libmachine: Checking connection to Docker...
	I1001 18:55:10.103734   19130 main.go:141] libmachine: (addons-800266) Calling .GetURL
	I1001 18:55:10.104989   19130 main.go:141] libmachine: (addons-800266) DBG | Using libvirt version 6000000
	I1001 18:55:10.107029   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:10.107465   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:10.107489   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:10.107665   19130 main.go:141] libmachine: Docker is up and running!
	I1001 18:55:10.107693   19130 main.go:141] libmachine: Reticulating splines...
	I1001 18:55:10.107701   19130 client.go:171] duration metric: took 24.525337699s to LocalClient.Create
	I1001 18:55:10.107724   19130 start.go:167] duration metric: took 24.52540274s to libmachine.API.Create "addons-800266"
	I1001 18:55:10.107742   19130 start.go:293] postStartSetup for "addons-800266" (driver="kvm2")
	I1001 18:55:10.107754   19130 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 18:55:10.107771   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:10.108014   19130 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 18:55:10.108038   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:10.110123   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:10.110416   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:10.110441   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:10.110534   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:10.110709   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:10.110838   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:10.110949   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:10.194149   19130 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 18:55:10.198077   19130 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 18:55:10.198110   19130 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 18:55:10.198208   19130 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 18:55:10.198253   19130 start.go:296] duration metric: took 90.498772ms for postStartSetup
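
The filesync "Scanning ... for local assets" lines above walk host-side directories and collect files that should be copied into the guest. A minimal sketch of that walk (the path is taken from the log; this is not the actual filesync package):

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
    )

    func main() {
        root := "/home/jenkins/minikube-integration/19736-11198/.minikube/files"
        var assets []string
        err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
            if err != nil {
                return err
            }
            if !d.IsDir() {
                assets = append(assets, path)
            }
            return nil
        })
        if err != nil {
            fmt.Println("scan skipped:", err)
            return
        }
        fmt.Printf("found %d local assets to sync\n", len(assets))
    }
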
	I1001 18:55:10.198290   19130 main.go:141] libmachine: (addons-800266) Calling .GetConfigRaw
	I1001 18:55:10.198844   19130 main.go:141] libmachine: (addons-800266) Calling .GetIP
	I1001 18:55:10.201351   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:10.201697   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:10.201727   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:10.201963   19130 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/config.json ...
	I1001 18:55:10.202182   19130 start.go:128] duration metric: took 24.638906267s to createHost
	I1001 18:55:10.202204   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:10.204338   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:10.204595   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:10.204640   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:10.204767   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:10.204960   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:10.205107   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:10.205266   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:10.205416   19130 main.go:141] libmachine: Using SSH client type: native
	I1001 18:55:10.205570   19130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1001 18:55:10.205579   19130 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 18:55:10.312750   19130 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727808910.290102030
	
	I1001 18:55:10.312772   19130 fix.go:216] guest clock: 1727808910.290102030
	I1001 18:55:10.312781   19130 fix.go:229] Guest: 2024-10-01 18:55:10.29010203 +0000 UTC Remote: 2024-10-01 18:55:10.202195194 +0000 UTC m=+24.739487507 (delta=87.906836ms)
	I1001 18:55:10.312825   19130 fix.go:200] guest clock delta is within tolerance: 87.906836ms
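
The tolerance check above compares the guest's "date +%s.%N" output with the host-side timestamp and would only reset the clock if the two drift too far apart. A tiny sketch with the values from this run (the 2s tolerance is an assumption for illustration, not minikube's constant):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values taken from the log lines above.
        guest := time.Unix(1727808910, 290102030)                                  // guest "date +%s.%N"
        host := time.Date(2024, time.October, 1, 18, 55, 10, 202195194, time.UTC) // host-side timestamp
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        tolerance := 2 * time.Second // assumed for illustration
        fmt.Printf("delta=%v within %v: %v\n", delta, tolerance, delta <= tolerance)
    }
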
	I1001 18:55:10.312832   19130 start.go:83] releasing machines lock for "addons-800266", held for 24.749666187s
	I1001 18:55:10.312860   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:10.313125   19130 main.go:141] libmachine: (addons-800266) Calling .GetIP
	I1001 18:55:10.315583   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:10.315963   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:10.315991   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:10.316175   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:10.316658   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:10.316826   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:10.316933   19130 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 18:55:10.316988   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:10.317004   19130 ssh_runner.go:195] Run: cat /version.json
	I1001 18:55:10.317025   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:10.319273   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:10.319595   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:10.319620   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:10.319748   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:10.319778   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:10.319949   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:10.320097   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:10.320129   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:10.320151   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:10.320249   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:10.320346   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:10.320495   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:10.320656   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:10.320776   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:10.444454   19130 ssh_runner.go:195] Run: systemctl --version
	I1001 18:55:10.450206   19130 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 18:55:10.611244   19130 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 18:55:10.617482   19130 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 18:55:10.617543   19130 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 18:55:10.632957   19130 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 18:55:10.632980   19130 start.go:495] detecting cgroup driver to use...
	I1001 18:55:10.633036   19130 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 18:55:10.650705   19130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 18:55:10.666584   19130 docker.go:217] disabling cri-docker service (if available) ...
	I1001 18:55:10.666640   19130 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 18:55:10.683310   19130 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 18:55:10.699876   19130 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 18:55:10.825746   19130 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 18:55:10.980084   19130 docker.go:233] disabling docker service ...
	I1001 18:55:10.980158   19130 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 18:55:10.993523   19130 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 18:55:11.005606   19130 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 18:55:11.119488   19130 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 18:55:11.244662   19130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 18:55:11.257856   19130 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 18:55:11.275482   19130 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 18:55:11.275558   19130 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:55:11.285459   19130 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 18:55:11.285525   19130 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:55:11.295431   19130 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:55:11.304948   19130 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:55:11.314765   19130 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 18:55:11.324726   19130 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:55:11.334767   19130 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:55:11.350940   19130 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:55:11.361015   19130 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 18:55:11.370035   19130 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 18:55:11.370089   19130 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 18:55:11.381742   19130 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 18:55:11.390795   19130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 18:55:11.515388   19130 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 18:55:11.603860   19130 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 18:55:11.603936   19130 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
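
Waiting "60s for socket path /var/run/crio/crio.sock" is essentially a poll-until-exists loop against the CRI-O socket. A local-only sketch of such a wait loop (minikube actually performs this check over SSH):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists as a unix socket or the timeout expires.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("crio socket is ready")
    }
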
	I1001 18:55:11.608260   19130 start.go:563] Will wait 60s for crictl version
	I1001 18:55:11.608338   19130 ssh_runner.go:195] Run: which crictl
	I1001 18:55:11.611830   19130 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 18:55:11.653312   19130 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 18:55:11.653438   19130 ssh_runner.go:195] Run: crio --version
	I1001 18:55:11.681133   19130 ssh_runner.go:195] Run: crio --version
	I1001 18:55:11.712735   19130 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 18:55:11.713844   19130 main.go:141] libmachine: (addons-800266) Calling .GetIP
	I1001 18:55:11.716408   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:11.716730   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:11.716773   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:11.716941   19130 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 18:55:11.720927   19130 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 18:55:11.732425   19130 kubeadm.go:883] updating cluster {Name:addons-800266 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-800266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 18:55:11.732541   19130 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 18:55:11.732598   19130 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 18:55:11.762110   19130 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1001 18:55:11.762190   19130 ssh_runner.go:195] Run: which lz4
	I1001 18:55:11.765905   19130 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 18:55:11.769536   19130 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 18:55:11.769563   19130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1001 18:55:13.007062   19130 crio.go:462] duration metric: took 1.241197445s to copy over tarball
	I1001 18:55:13.007129   19130 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 18:55:15.197941   19130 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.190786279s)
	I1001 18:55:15.197975   19130 crio.go:469] duration metric: took 2.190886906s to extract the tarball
	I1001 18:55:15.197990   19130 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1001 18:55:15.234522   19130 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 18:55:15.277654   19130 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 18:55:15.277676   19130 cache_images.go:84] Images are preloaded, skipping loading
	I1001 18:55:15.277685   19130 kubeadm.go:934] updating node { 192.168.39.56 8443 v1.31.1 crio true true} ...
	I1001 18:55:15.277783   19130 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-800266 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-800266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 18:55:15.277848   19130 ssh_runner.go:195] Run: crio config
	I1001 18:55:15.324427   19130 cni.go:84] Creating CNI manager for ""
	I1001 18:55:15.324453   19130 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 18:55:15.324463   19130 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 18:55:15.324487   19130 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.56 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-800266 NodeName:addons-800266 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.56"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.56 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 18:55:15.324600   19130 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.56
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-800266"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.56
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.56"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 18:55:15.324654   19130 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 18:55:15.334181   19130 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 18:55:15.334244   19130 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 18:55:15.343195   19130 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1001 18:55:15.359252   19130 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 18:55:15.375182   19130 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I1001 18:55:15.392316   19130 ssh_runner.go:195] Run: grep 192.168.39.56	control-plane.minikube.internal$ /etc/hosts
	I1001 18:55:15.396057   19130 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.56	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 18:55:15.407370   19130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 18:55:15.534602   19130 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 18:55:15.552660   19130 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266 for IP: 192.168.39.56
	I1001 18:55:15.552692   19130 certs.go:194] generating shared ca certs ...
	I1001 18:55:15.552741   19130 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:55:15.552942   19130 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 18:55:15.623145   19130 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt ...
	I1001 18:55:15.623182   19130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt: {Name:mk05f953b4d77efd685e5c62d9dd4bde7959afb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:55:15.623355   19130 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key ...
	I1001 18:55:15.623366   19130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key: {Name:mka07ee01d58eddda5541c1019a73eefd54f1248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:55:15.623435   19130 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 18:55:15.869439   19130 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt ...
	I1001 18:55:15.869470   19130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt: {Name:mkbbeef0220b26662e60cc1bef4abf6707c29b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:55:15.869629   19130 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key ...
	I1001 18:55:15.869639   19130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key: {Name:mk9ed39639120dff6cf2537c93b22962f508fe4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:55:15.869714   19130 certs.go:256] generating profile certs ...
	I1001 18:55:15.869769   19130 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.key
	I1001 18:55:15.869790   19130 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt with IP's: []
	I1001 18:55:15.988965   19130 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt ...
	I1001 18:55:15.988993   19130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: {Name:mkcf5eaaec8c159e822bb977d77d86a7c8478423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:55:15.989155   19130 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.key ...
	I1001 18:55:15.989165   19130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.key: {Name:mk945838746efd1efe9fce55c262a25f2ad1fbd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:55:15.989232   19130 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/apiserver.key.1f2c5f3f
	I1001 18:55:15.989258   19130 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/apiserver.crt.1f2c5f3f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.56]
	I1001 18:55:16.410599   19130 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/apiserver.crt.1f2c5f3f ...
	I1001 18:55:16.410634   19130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/apiserver.crt.1f2c5f3f: {Name:mka317f1778f485e5e05792a9b3437352b18d724 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:55:16.410826   19130 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/apiserver.key.1f2c5f3f ...
	I1001 18:55:16.410842   19130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/apiserver.key.1f2c5f3f: {Name:mk37ba30ffedca60d53f12cc36572f0ae020fe2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:55:16.410942   19130 certs.go:381] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/apiserver.crt.1f2c5f3f -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/apiserver.crt
	I1001 18:55:16.411022   19130 certs.go:385] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/apiserver.key.1f2c5f3f -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/apiserver.key
	I1001 18:55:16.411073   19130 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/proxy-client.key
	I1001 18:55:16.411091   19130 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/proxy-client.crt with IP's: []
	I1001 18:55:16.561425   19130 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/proxy-client.crt ...
	I1001 18:55:16.561457   19130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/proxy-client.crt: {Name:mkd4f7b51135c43a924e8e8c10071c6230b456b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:55:16.561631   19130 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/proxy-client.key ...
	I1001 18:55:16.561643   19130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/proxy-client.key: {Name:mka9b62c10d8df5e10df597d3e62631abaab9c37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:55:16.561830   19130 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 18:55:16.561869   19130 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 18:55:16.561897   19130 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 18:55:16.561925   19130 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 18:55:16.562550   19130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 18:55:16.588782   19130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 18:55:16.610915   19130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 18:55:16.633611   19130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 18:55:16.655948   19130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1001 18:55:16.678605   19130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 18:55:16.702145   19130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 18:55:16.726482   19130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 18:55:16.749789   19130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 18:55:16.771866   19130 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 18:55:16.787728   19130 ssh_runner.go:195] Run: openssl version
	I1001 18:55:16.793267   19130 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 18:55:16.803175   19130 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 18:55:16.807272   19130 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 18:55:16.807334   19130 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 18:55:16.812930   19130 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
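
The two commands above install the minikube CA into the system trust store: the symlink name is the certificate's "openssl x509 -hash" value with a ".0" suffix (b5213941.0 in this run). A small sketch of the same flow, shelling out to openssl as the log does:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941 in this run
        link := "/etc/ssl/certs/" + hash + ".0"
        _ = os.Remove(link) // mirror ln -fs: replace any stale link
        if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link)
    }
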
	I1001 18:55:16.822916   19130 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 18:55:16.826703   19130 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 18:55:16.826753   19130 kubeadm.go:392] StartCluster: {Name:addons-800266 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-800266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 18:55:16.826818   19130 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 18:55:16.826861   19130 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 18:55:16.860195   19130 cri.go:89] found id: ""
	I1001 18:55:16.860255   19130 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 18:55:16.869536   19130 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 18:55:16.878885   19130 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 18:55:16.887691   19130 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 18:55:16.887712   19130 kubeadm.go:157] found existing configuration files:
	
	I1001 18:55:16.887753   19130 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 18:55:16.896852   19130 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 18:55:16.896906   19130 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 18:55:16.905647   19130 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 18:55:16.913783   19130 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 18:55:16.913831   19130 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 18:55:16.922270   19130 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 18:55:16.930603   19130 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 18:55:16.930654   19130 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 18:55:16.939360   19130 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 18:55:16.947727   19130 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 18:55:16.947797   19130 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 18:55:16.956711   19130 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 18:55:17.004616   19130 kubeadm.go:310] W1001 18:55:16.988682     815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 18:55:17.005383   19130 kubeadm.go:310] W1001 18:55:16.989555     815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 18:55:17.109195   19130 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 18:55:26.823415   19130 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 18:55:26.823495   19130 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 18:55:26.823576   19130 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 18:55:26.823703   19130 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 18:55:26.823826   19130 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 18:55:26.823914   19130 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 18:55:26.825369   19130 out.go:235]   - Generating certificates and keys ...
	I1001 18:55:26.825456   19130 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 18:55:26.825543   19130 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 18:55:26.825634   19130 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1001 18:55:26.825712   19130 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1001 18:55:26.825799   19130 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1001 18:55:26.825889   19130 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1001 18:55:26.825980   19130 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1001 18:55:26.826141   19130 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-800266 localhost] and IPs [192.168.39.56 127.0.0.1 ::1]
	I1001 18:55:26.826227   19130 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1001 18:55:26.826368   19130 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-800266 localhost] and IPs [192.168.39.56 127.0.0.1 ::1]
	I1001 18:55:26.826465   19130 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1001 18:55:26.826557   19130 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1001 18:55:26.826627   19130 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1001 18:55:26.826718   19130 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 18:55:26.826791   19130 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 18:55:26.826874   19130 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 18:55:26.826949   19130 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 18:55:26.827038   19130 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 18:55:26.827109   19130 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 18:55:26.827220   19130 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 18:55:26.827316   19130 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 18:55:26.829589   19130 out.go:235]   - Booting up control plane ...
	I1001 18:55:26.829678   19130 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 18:55:26.829741   19130 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 18:55:26.829804   19130 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 18:55:26.829928   19130 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 18:55:26.830071   19130 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 18:55:26.830118   19130 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 18:55:26.830240   19130 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 18:55:26.830337   19130 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 18:55:26.830388   19130 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.006351501s
	I1001 18:55:26.830452   19130 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1001 18:55:26.830524   19130 kubeadm.go:310] [api-check] The API server is healthy after 4.50235606s
	I1001 18:55:26.830614   19130 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 18:55:26.830744   19130 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 18:55:26.830801   19130 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 18:55:26.830949   19130 kubeadm.go:310] [mark-control-plane] Marking the node addons-800266 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 18:55:26.830997   19130 kubeadm.go:310] [bootstrap-token] Using token: szuwwh.2qeffcf97dxqsrg4
	I1001 18:55:26.832123   19130 out.go:235]   - Configuring RBAC rules ...
	I1001 18:55:26.832217   19130 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 18:55:26.832286   19130 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 18:55:26.832431   19130 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 18:55:26.832543   19130 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 18:55:26.832660   19130 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 18:55:26.832750   19130 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 18:55:26.832910   19130 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 18:55:26.832947   19130 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 18:55:26.832986   19130 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 18:55:26.832995   19130 kubeadm.go:310] 
	I1001 18:55:26.833047   19130 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 18:55:26.833052   19130 kubeadm.go:310] 
	I1001 18:55:26.833147   19130 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 18:55:26.833161   19130 kubeadm.go:310] 
	I1001 18:55:26.833183   19130 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 18:55:26.833231   19130 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 18:55:26.833281   19130 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 18:55:26.833299   19130 kubeadm.go:310] 
	I1001 18:55:26.833347   19130 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 18:55:26.833353   19130 kubeadm.go:310] 
	I1001 18:55:26.833395   19130 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 18:55:26.833401   19130 kubeadm.go:310] 
	I1001 18:55:26.833456   19130 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 18:55:26.833520   19130 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 18:55:26.833589   19130 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 18:55:26.833608   19130 kubeadm.go:310] 
	I1001 18:55:26.833689   19130 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 18:55:26.833800   19130 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 18:55:26.833809   19130 kubeadm.go:310] 
	I1001 18:55:26.833909   19130 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token szuwwh.2qeffcf97dxqsrg4 \
	I1001 18:55:26.834032   19130 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 \
	I1001 18:55:26.834063   19130 kubeadm.go:310] 	--control-plane 
	I1001 18:55:26.834072   19130 kubeadm.go:310] 
	I1001 18:55:26.834181   19130 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 18:55:26.834189   19130 kubeadm.go:310] 
	I1001 18:55:26.834264   19130 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token szuwwh.2qeffcf97dxqsrg4 \
	I1001 18:55:26.834362   19130 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 
	I1001 18:55:26.834371   19130 cni.go:84] Creating CNI manager for ""
	I1001 18:55:26.834377   19130 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 18:55:26.835560   19130 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 18:55:26.836557   19130 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 18:55:26.846776   19130 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1001 18:55:26.864849   19130 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 18:55:26.864941   19130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:55:26.864965   19130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-800266 minikube.k8s.io/updated_at=2024_10_01T18_55_26_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=addons-800266 minikube.k8s.io/primary=true
	I1001 18:55:26.899023   19130 ops.go:34] apiserver oom_adj: -16
	I1001 18:55:27.002107   19130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:55:27.502522   19130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:55:28.002690   19130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:55:28.502745   19130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:55:29.002933   19130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:55:29.502748   19130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:55:30.002556   19130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:55:30.502899   19130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:55:31.002194   19130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:55:31.502428   19130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:55:31.665980   19130 kubeadm.go:1113] duration metric: took 4.801106914s to wait for elevateKubeSystemPrivileges
	I1001 18:55:31.666017   19130 kubeadm.go:394] duration metric: took 14.839266983s to StartCluster
	I1001 18:55:31.666042   19130 settings.go:142] acquiring lock: {Name:mkeb3fe64ed992373f048aef24eb0d675bcab60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:55:31.666197   19130 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 18:55:31.666705   19130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/kubeconfig: {Name:mk7ef19d546928001abf2478a3abe1d17765a591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:55:31.666985   19130 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 18:55:31.667015   19130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1001 18:55:31.667075   19130 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1001 18:55:31.667205   19130 config.go:182] Loaded profile config "addons-800266": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 18:55:31.667216   19130 addons.go:69] Setting yakd=true in profile "addons-800266"
	I1001 18:55:31.667235   19130 addons.go:234] Setting addon yakd=true in "addons-800266"
	I1001 18:55:31.667241   19130 addons.go:69] Setting ingress-dns=true in profile "addons-800266"
	I1001 18:55:31.667261   19130 addons.go:69] Setting metrics-server=true in profile "addons-800266"
	I1001 18:55:31.667258   19130 addons.go:69] Setting registry=true in profile "addons-800266"
	I1001 18:55:31.667281   19130 addons.go:69] Setting storage-provisioner=true in profile "addons-800266"
	I1001 18:55:31.667283   19130 addons.go:69] Setting inspektor-gadget=true in profile "addons-800266"
	I1001 18:55:31.667288   19130 addons.go:234] Setting addon registry=true in "addons-800266"
	I1001 18:55:31.667291   19130 addons.go:69] Setting ingress=true in profile "addons-800266"
	I1001 18:55:31.667295   19130 addons.go:234] Setting addon inspektor-gadget=true in "addons-800266"
	I1001 18:55:31.667304   19130 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-800266"
	I1001 18:55:31.667314   19130 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-800266"
	I1001 18:55:31.667319   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.667326   19130 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-800266"
	I1001 18:55:31.667329   19130 addons.go:69] Setting volcano=true in profile "addons-800266"
	I1001 18:55:31.667334   19130 addons.go:69] Setting volumesnapshots=true in profile "addons-800266"
	I1001 18:55:31.667340   19130 addons.go:234] Setting addon volcano=true in "addons-800266"
	I1001 18:55:31.667348   19130 addons.go:234] Setting addon volumesnapshots=true in "addons-800266"
	I1001 18:55:31.667358   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.667359   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.667369   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.667273   19130 addons.go:69] Setting cloud-spanner=true in profile "addons-800266"
	I1001 18:55:31.667443   19130 addons.go:234] Setting addon cloud-spanner=true in "addons-800266"
	I1001 18:55:31.667479   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.667786   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.667793   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.667804   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.667319   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.667294   19130 addons.go:234] Setting addon storage-provisioner=true in "addons-800266"
	I1001 18:55:31.667834   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.667836   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.667854   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.667858   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.667899   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.667925   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.667304   19130 addons.go:234] Setting addon ingress=true in "addons-800266"
	I1001 18:55:31.668022   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.667282   19130 addons.go:69] Setting gcp-auth=true in profile "addons-800266"
	I1001 18:55:31.668118   19130 mustload.go:65] Loading cluster: addons-800266
	I1001 18:55:31.668140   19130 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-800266"
	I1001 18:55:31.668180   19130 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-800266"
	I1001 18:55:31.668195   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.668212   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.668224   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.668282   19130 config.go:182] Loaded profile config "addons-800266": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 18:55:31.668297   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.668320   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.668382   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.668408   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.668567   19130 addons.go:69] Setting default-storageclass=true in profile "addons-800266"
	I1001 18:55:31.668582   19130 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-800266"
	I1001 18:55:31.668641   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.668642   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.668660   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.668666   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.667273   19130 addons.go:234] Setting addon metrics-server=true in "addons-800266"
	I1001 18:55:31.667825   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.667323   19130 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-800266"
	I1001 18:55:31.667269   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.668993   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.669064   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.669712   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.669746   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.667272   19130 addons.go:234] Setting addon ingress-dns=true in "addons-800266"
	I1001 18:55:31.670244   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.670634   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.670662   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.673142   19130 out.go:177] * Verifying Kubernetes components...
	I1001 18:55:31.673329   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.673415   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.673925   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.673984   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.680503   19130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 18:55:31.690767   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37929
	I1001 18:55:31.690841   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36693
	I1001 18:55:31.691179   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33717
	I1001 18:55:31.691425   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36565
	I1001 18:55:31.691509   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41113
	I1001 18:55:31.691999   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43565
	I1001 18:55:31.692009   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.692014   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.692546   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.692574   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.692718   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.692951   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.692968   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.692973   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.693349   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.693596   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.693631   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.693888   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.693917   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.693892   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.694274   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.694420   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.694442   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.694466   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.695332   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.708725   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.708763   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.708924   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.708950   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.709011   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.709035   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.710378   19130 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-800266"
	I1001 18:55:31.710421   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.710785   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.710822   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.713056   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.713504   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.713704   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.713729   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.714113   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.714392   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.714417   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.714818   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.721857   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34071
	I1001 18:55:31.722251   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.722778   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.722799   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.723264   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.723854   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.723974   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.732891   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.732943   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.733023   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.733042   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.750470   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37989
	I1001 18:55:31.750860   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45171
	I1001 18:55:31.750954   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.751020   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42719
	I1001 18:55:31.751096   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39129
	I1001 18:55:31.751387   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.751495   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.751965   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.751986   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.752116   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.752130   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.752245   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.752255   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.752714   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.752725   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.752775   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.753346   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.753386   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.753604   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.753660   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.754012   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.754048   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.754248   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.754272   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.754651   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.755197   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.755236   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.756270   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.756424   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35871
	I1001 18:55:31.756875   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.757392   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.757407   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.757744   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.758258   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.758281   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.758956   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43089
	I1001 18:55:31.759011   19130 out.go:177]   - Using image docker.io/registry:2.8.3
	I1001 18:55:31.759203   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43281
	I1001 18:55:31.759320   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.759794   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.759813   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.759874   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.760288   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.760512   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.761400   19130 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1001 18:55:31.761711   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.761732   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.762176   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.762361   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.762409   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.762492   19130 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1001 18:55:31.762507   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1001 18:55:31.762526   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:31.764085   19130 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1001 18:55:31.765055   19130 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1001 18:55:31.765077   19130 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1001 18:55:31.765102   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:31.768558   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.768570   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40601
	I1001 18:55:31.769998   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.770715   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40553
	I1001 18:55:31.770719   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:31.770742   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.770795   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.770918   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44381
	I1001 18:55:31.771091   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:31.771095   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.771284   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:31.771357   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.771587   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.771607   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.771680   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:31.771695   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.771795   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.771806   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.771939   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:31.771957   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.771999   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:31.772122   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:31.772185   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:31.772259   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.772444   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.772459   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.772528   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:31.772659   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:31.772783   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.772817   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.773266   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.773346   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.774093   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.775569   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42809
	I1001 18:55:31.775966   19130 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 18:55:31.776142   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.776773   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.776791   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.776854   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.777479   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.777511   19130 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 18:55:31.777519   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.777527   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 18:55:31.777545   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:31.778182   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.778536   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.781407   19130 addons.go:234] Setting addon default-storageclass=true in "addons-800266"
	I1001 18:55:31.781453   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.781814   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.781847   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.782044   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.782391   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33453
	I1001 18:55:31.782586   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:31.782602   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.782706   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.782814   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:31.782971   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:31.783183   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:31.783328   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.783340   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.783534   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:31.785471   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41145
	I1001 18:55:31.785942   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.786002   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.786199   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.788243   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.788885   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.788905   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.789786   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.790107   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.790184   19130 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1001 18:55:31.790802   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35599
	I1001 18:55:31.791060   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39279
	I1001 18:55:31.791431   19130 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1001 18:55:31.791449   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.791450   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1001 18:55:31.791501   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:31.791893   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.792463   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.792481   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.793038   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.793062   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.793435   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.793679   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.795141   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.795617   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.796067   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.796388   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:31.796409   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.796641   19130 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 18:55:31.796752   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:31.796839   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.797190   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:31.797208   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.797367   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:31.797514   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:31.797808   19130 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I1001 18:55:31.799107   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:31.799401   19130 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 18:55:31.799502   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.799411   19130 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1001 18:55:31.799553   19130 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1001 18:55:31.799575   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:31.799539   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.801890   19130 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1001 18:55:31.803010   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.803608   19130 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1001 18:55:31.803631   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1001 18:55:31.803653   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:31.803807   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:31.803837   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.804047   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:31.804208   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:31.804345   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:31.804514   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:31.807571   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39875
	I1001 18:55:31.807618   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.807810   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40893
	I1001 18:55:31.808001   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.808142   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:31.808159   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.808339   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:31.808507   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:31.808602   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.808624   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:31.808756   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.808754   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:31.808777   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.809193   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.809409   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.811325   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.812025   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.812050   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.812525   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.812789   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.813573   19130 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1001 18:55:31.814543   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.815571   19130 out.go:177]   - Using image docker.io/busybox:stable
	I1001 18:55:31.815583   19130 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1001 18:55:31.816266   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38477
	I1001 18:55:31.816467   19130 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1001 18:55:31.816499   19130 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1001 18:55:31.816516   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39245
	I1001 18:55:31.816520   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:31.816484   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43675
	I1001 18:55:31.816846   19130 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1001 18:55:31.816873   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1001 18:55:31.816895   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:31.817006   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.817090   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.817740   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.817771   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.818585   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.818934   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.819099   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35865
	I1001 18:55:31.819276   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.819590   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.819765   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.819786   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.820123   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.820329   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.820350   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.820657   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.821027   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.821106   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.821584   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.821716   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:31.821753   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:31.821767   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.821959   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.821976   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.822087   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.822248   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:31.822557   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:31.822654   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.822922   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:31.822957   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.823011   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:31.823182   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:31.823181   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:31.823267   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41147
	I1001 18:55:31.823490   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:31.823669   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:31.823820   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.823894   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.823913   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.823914   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.824381   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.824399   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.824778   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.825303   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:31.825339   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:31.825440   19130 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1001 18:55:31.825495   19130 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1001 18:55:31.825606   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.825868   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:31.825883   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:31.826597   19130 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1001 18:55:31.826610   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1001 18:55:31.826626   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:31.827319   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45937
	I1001 18:55:31.827337   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.827394   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:31.827414   19130 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1001 18:55:31.827439   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:31.827816   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:31.827827   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:31.827838   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:31.827971   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.828393   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:31.828422   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:31.828494   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.828510   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.828435   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:31.828572   19130 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	W1001 18:55:31.828638   19130 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1001 18:55:31.828870   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.829101   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.829574   19130 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1001 18:55:31.829669   19130 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1001 18:55:31.829688   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1001 18:55:31.829707   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:31.831464   19130 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1001 18:55:31.831630   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.831709   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.832455   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:31.832477   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.832670   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:31.832798   19130 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1001 18:55:31.832801   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:31.833041   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:31.833153   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:31.833750   19130 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1001 18:55:31.833828   19130 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1001 18:55:31.833842   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.833844   19130 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1001 18:55:31.833865   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:31.833963   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35187
	I1001 18:55:31.834223   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:31.834237   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.834430   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:31.834564   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:31.834635   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.834723   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:31.834862   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:31.835391   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.835404   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.835607   19130 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1001 18:55:31.835785   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.835994   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.837304   19130 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1001 18:55:31.837401   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.837418   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:31.837432   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.837568   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:31.837746   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:31.837931   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:31.838076   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:31.839309   19130 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1001 18:55:31.840143   19130 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1001 18:55:31.840159   19130 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1001 18:55:31.840174   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:31.844430   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:31.844472   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.844492   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:31.844505   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.844607   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:31.844764   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:31.844898   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	W1001 18:55:31.849821   19130 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:60608->192.168.39.56:22: read: connection reset by peer
	I1001 18:55:31.849857   19130 retry.go:31] will retry after 189.152368ms: ssh: handshake failed: read tcp 192.168.39.1:60608->192.168.39.56:22: read: connection reset by peer
	I1001 18:55:31.852259   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41861
	I1001 18:55:31.852851   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:31.853383   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:31.853403   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:31.853754   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:31.853943   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:31.855971   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:31.856197   19130 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 18:55:31.856216   19130 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 18:55:31.856237   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:31.859336   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.859786   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:31.859811   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:31.860005   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:31.860172   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:31.860318   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:31.860466   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:32.151512   19130 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1001 18:55:32.151546   19130 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1001 18:55:32.221551   19130 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 18:55:32.221638   19130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1001 18:55:32.276121   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 18:55:32.280552   19130 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1001 18:55:32.280576   19130 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1001 18:55:32.305872   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1001 18:55:32.308237   19130 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1001 18:55:32.308261   19130 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1001 18:55:32.327159   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1001 18:55:32.334239   19130 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1001 18:55:32.334260   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1001 18:55:32.335955   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1001 18:55:32.353745   19130 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1001 18:55:32.353788   19130 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1001 18:55:32.358636   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1001 18:55:32.364700   19130 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1001 18:55:32.364719   19130 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1001 18:55:32.365506   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1001 18:55:32.367135   19130 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1001 18:55:32.367153   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1001 18:55:32.381753   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 18:55:32.516910   19130 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1001 18:55:32.516943   19130 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1001 18:55:32.536224   19130 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1001 18:55:32.536252   19130 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1001 18:55:32.546478   19130 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1001 18:55:32.546506   19130 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1001 18:55:32.554299   19130 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1001 18:55:32.554336   19130 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1001 18:55:32.573148   19130 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1001 18:55:32.573171   19130 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1001 18:55:32.584795   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1001 18:55:32.687156   19130 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1001 18:55:32.687187   19130 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1001 18:55:32.705017   19130 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1001 18:55:32.705040   19130 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1001 18:55:32.785218   19130 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1001 18:55:32.785242   19130 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1001 18:55:32.797466   19130 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 18:55:32.797492   19130 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1001 18:55:32.853214   19130 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1001 18:55:32.853243   19130 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1001 18:55:32.965364   19130 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1001 18:55:32.965390   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1001 18:55:33.018514   19130 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1001 18:55:33.018542   19130 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1001 18:55:33.080376   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 18:55:33.100949   19130 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1001 18:55:33.100979   19130 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1001 18:55:33.141589   19130 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1001 18:55:33.141619   19130 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1001 18:55:33.173678   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1001 18:55:33.269056   19130 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1001 18:55:33.269091   19130 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1001 18:55:33.357862   19130 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1001 18:55:33.357891   19130 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1001 18:55:33.391029   19130 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 18:55:33.391052   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1001 18:55:33.454079   19130 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1001 18:55:33.454101   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1001 18:55:33.605046   19130 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1001 18:55:33.605076   19130 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1001 18:55:33.753379   19130 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1001 18:55:33.753409   19130 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1001 18:55:33.771591   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 18:55:33.871945   19130 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1001 18:55:33.871974   19130 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1001 18:55:33.923301   19130 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1001 18:55:33.923324   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1001 18:55:33.993955   19130 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1001 18:55:33.993979   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1001 18:55:34.088724   19130 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1001 18:55:34.088747   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1001 18:55:34.236254   19130 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1001 18:55:34.236286   19130 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1001 18:55:34.352717   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1001 18:55:34.471689   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1001 18:55:34.508854   19130 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.287176493s)
	I1001 18:55:34.508900   19130 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1001 18:55:34.508913   19130 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.28732779s)
	I1001 18:55:34.509641   19130 node_ready.go:35] waiting up to 6m0s for node "addons-800266" to be "Ready" ...
	I1001 18:55:34.516880   19130 node_ready.go:49] node "addons-800266" has status "Ready":"True"
	I1001 18:55:34.516926   19130 node_ready.go:38] duration metric: took 7.250218ms for node "addons-800266" to be "Ready" ...
	I1001 18:55:34.516937   19130 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 18:55:34.529252   19130 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-g6xbn" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:35.018917   19130 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-800266" context rescaled to 1 replicas
	I1001 18:55:35.625874   19130 pod_ready.go:93] pod "coredns-7c65d6cfc9-g6xbn" in "kube-system" namespace has status "Ready":"True"
	I1001 18:55:35.625897   19130 pod_ready.go:82] duration metric: took 1.096620077s for pod "coredns-7c65d6cfc9-g6xbn" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:35.625906   19130 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-h656l" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:35.993012   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.716854427s)
	I1001 18:55:35.993071   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:35.993084   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:35.993091   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.687174378s)
	I1001 18:55:35.993140   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:35.993156   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:35.993497   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:35.993504   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:35.993520   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:35.993530   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:35.993531   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:35.993543   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:35.993552   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:35.993566   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:35.993578   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:35.993600   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:35.993922   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:35.993953   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:35.993962   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:35.993992   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:35.994016   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:36.894838   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.567639407s)
	I1001 18:55:36.894882   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.558908033s)
	I1001 18:55:36.894907   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:36.894907   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:36.894919   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:36.894923   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:36.894938   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.5362666s)
	I1001 18:55:36.894978   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:36.894993   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:36.895309   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:36.895311   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:36.895325   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:36.895328   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:36.895340   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:36.895343   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:36.895327   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:36.895353   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:36.895349   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:36.895359   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:36.895375   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:36.895384   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:36.895405   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:36.895384   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:36.895425   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:36.897378   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:36.897390   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:36.897388   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:36.897472   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:36.897399   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:36.897493   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:36.897420   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:36.897426   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:36.897561   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:36.990964   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:36.990984   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:36.991265   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:36.991279   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:36.991304   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:37.197619   19130 pod_ready.go:93] pod "coredns-7c65d6cfc9-h656l" in "kube-system" namespace has status "Ready":"True"
	I1001 18:55:37.197646   19130 pod_ready.go:82] duration metric: took 1.571733309s for pod "coredns-7c65d6cfc9-h656l" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:37.197656   19130 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-800266" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:37.230347   19130 pod_ready.go:93] pod "etcd-addons-800266" in "kube-system" namespace has status "Ready":"True"
	I1001 18:55:37.230370   19130 pod_ready.go:82] duration metric: took 32.707875ms for pod "etcd-addons-800266" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:37.230383   19130 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-800266" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:37.293032   19130 pod_ready.go:93] pod "kube-apiserver-addons-800266" in "kube-system" namespace has status "Ready":"True"
	I1001 18:55:37.293059   19130 pod_ready.go:82] duration metric: took 62.668736ms for pod "kube-apiserver-addons-800266" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:37.293072   19130 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-800266" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:37.312542   19130 pod_ready.go:93] pod "kube-controller-manager-addons-800266" in "kube-system" namespace has status "Ready":"True"
	I1001 18:55:37.312568   19130 pod_ready.go:82] duration metric: took 19.487958ms for pod "kube-controller-manager-addons-800266" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:37.312579   19130 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x9xtt" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:37.328585   19130 pod_ready.go:93] pod "kube-proxy-x9xtt" in "kube-system" namespace has status "Ready":"True"
	I1001 18:55:37.328607   19130 pod_ready.go:82] duration metric: took 16.022038ms for pod "kube-proxy-x9xtt" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:37.328618   19130 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-800266" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:38.852207   19130 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1001 18:55:38.852242   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:38.855173   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:38.855652   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:38.855682   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:38.855897   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:38.856141   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:38.856308   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:38.856469   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:38.915172   19130 pod_ready.go:93] pod "kube-scheduler-addons-800266" in "kube-system" namespace has status "Ready":"True"
	I1001 18:55:38.915196   19130 pod_ready.go:82] duration metric: took 1.58657044s for pod "kube-scheduler-addons-800266" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:38.915207   19130 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-brmgb" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:39.079380   19130 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1001 18:55:39.222208   19130 addons.go:234] Setting addon gcp-auth=true in "addons-800266"
	I1001 18:55:39.222261   19130 host.go:66] Checking if "addons-800266" exists ...
	I1001 18:55:39.222641   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:39.222688   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:39.238651   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42851
	I1001 18:55:39.239165   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:39.239709   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:39.239725   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:39.240016   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:39.240467   19130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:55:39.240518   19130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:55:39.256916   19130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41449
	I1001 18:55:39.257474   19130 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:55:39.257960   19130 main.go:141] libmachine: Using API Version  1
	I1001 18:55:39.257979   19130 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:55:39.258374   19130 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:55:39.258590   19130 main.go:141] libmachine: (addons-800266) Calling .GetState
	I1001 18:55:39.260194   19130 main.go:141] libmachine: (addons-800266) Calling .DriverName
	I1001 18:55:39.260431   19130 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1001 18:55:39.260459   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHHostname
	I1001 18:55:39.263038   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:39.263415   19130 main.go:141] libmachine: (addons-800266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:3f:6d", ip: ""} in network mk-addons-800266: {Iface:virbr1 ExpiryTime:2024-10-01 19:55:00 +0000 UTC Type:0 Mac:52:54:00:2e:3f:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:addons-800266 Clientid:01:52:54:00:2e:3f:6d}
	I1001 18:55:39.263442   19130 main.go:141] libmachine: (addons-800266) DBG | domain addons-800266 has defined IP address 192.168.39.56 and MAC address 52:54:00:2e:3f:6d in network mk-addons-800266
	I1001 18:55:39.263612   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHPort
	I1001 18:55:39.263788   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHKeyPath
	I1001 18:55:39.263953   19130 main.go:141] libmachine: (addons-800266) Calling .GetSSHUsername
	I1001 18:55:39.264105   19130 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/addons-800266/id_rsa Username:docker}
	I1001 18:55:39.514349   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.148811867s)
	I1001 18:55:39.514407   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:39.514405   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.132620731s)
	I1001 18:55:39.514421   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:39.514441   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:39.514456   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:39.514465   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.929632059s)
	I1001 18:55:39.514502   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:39.514515   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:39.514697   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:39.514711   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:39.514719   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:39.514718   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.434311584s)
	I1001 18:55:39.514726   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:39.514742   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:39.514753   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:39.514838   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.341132511s)
	I1001 18:55:39.514859   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:39.514871   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:39.514888   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:39.514914   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:39.514922   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:39.514928   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:39.515233   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:39.515283   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:39.515291   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:39.515354   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.743724399s)
	W1001 18:55:39.515383   19130 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1001 18:55:39.515414   19130 retry.go:31] will retry after 247.380756ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1001 18:55:39.515508   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.162744865s)
	I1001 18:55:39.515574   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:39.515599   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:39.515657   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:39.515696   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:39.515720   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:39.515726   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:39.515735   19130 addons.go:475] Verifying addon ingress=true in "addons-800266"
	I1001 18:55:39.515952   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:39.515999   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:39.516007   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:39.516015   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:39.516025   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:39.516344   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:39.516524   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:39.516539   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:39.516546   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:39.516718   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:39.516763   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:39.516770   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:39.516780   19130 addons.go:475] Verifying addon registry=true in "addons-800266"
	I1001 18:55:39.516906   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:39.518843   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:39.518855   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:39.516933   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:39.518862   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:39.516957   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:39.518879   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:39.518888   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:39.518895   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:39.516971   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:39.516998   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:39.518959   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:39.519087   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:39.519114   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:39.519148   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:39.519155   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:39.519386   19130 out.go:177] * Verifying ingress addon...
	I1001 18:55:39.519662   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:39.519675   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:39.519692   19130 addons.go:475] Verifying addon metrics-server=true in "addons-800266"
	I1001 18:55:39.520324   19130 out.go:177] * Verifying registry addon...
	I1001 18:55:39.520335   19130 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-800266 service yakd-dashboard -n yakd-dashboard
	
	I1001 18:55:39.521344   19130 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1001 18:55:39.522166   19130 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1001 18:55:39.556888   19130 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1001 18:55:39.556912   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:39.557314   19130 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1001 18:55:39.557334   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:39.573325   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:39.573345   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:39.573629   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:39.573647   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:39.763251   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 18:55:40.027429   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:40.029494   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:40.258774   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.787032566s)
	I1001 18:55:40.258827   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:40.258851   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:40.259133   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:40.259170   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:40.259189   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:40.259201   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:40.259480   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:40.259569   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:40.259584   19130 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-800266"
	I1001 18:55:40.259549   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:40.260227   19130 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 18:55:40.260885   19130 out.go:177] * Verifying csi-hostpath-driver addon...
	I1001 18:55:40.262290   19130 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1001 18:55:40.263285   19130 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1001 18:55:40.263689   19130 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1001 18:55:40.263708   19130 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1001 18:55:40.273519   19130 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1001 18:55:40.273549   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:40.352400   19130 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1001 18:55:40.352429   19130 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1001 18:55:40.419876   19130 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1001 18:55:40.419902   19130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1001 18:55:40.457274   19130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1001 18:55:40.531943   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:40.532048   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:40.849941   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:40.923365   19130 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-brmgb" in "kube-system" namespace has status "Ready":"False"
	I1001 18:55:41.026416   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:41.026419   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:41.269269   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:41.455114   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.691821994s)
	I1001 18:55:41.455174   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:41.455188   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:41.455536   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:41.455553   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:41.455559   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:41.455566   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:41.455540   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:41.455831   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:41.455849   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:41.526226   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:41.526896   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:41.784716   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:41.796016   19130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.338692765s)
	I1001 18:55:41.796066   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:41.796078   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:41.796355   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:41.796416   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:41.796434   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:41.796448   19130 main.go:141] libmachine: Making call to close driver server
	I1001 18:55:41.796481   19130 main.go:141] libmachine: (addons-800266) Calling .Close
	I1001 18:55:41.796724   19130 main.go:141] libmachine: (addons-800266) DBG | Closing plugin on server side
	I1001 18:55:41.796778   19130 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:55:41.796792   19130 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:55:41.797764   19130 addons.go:475] Verifying addon gcp-auth=true in "addons-800266"
	I1001 18:55:41.799079   19130 out.go:177] * Verifying gcp-auth addon...
	I1001 18:55:41.801295   19130 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1001 18:55:41.883115   19130 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1001 18:55:41.883137   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:42.027005   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:42.027418   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:42.279289   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:42.317026   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:42.526596   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:42.528130   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:42.768120   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:42.805316   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:43.027053   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:43.027083   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:43.268570   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:43.304394   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:43.421742   19130 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-brmgb" in "kube-system" namespace has status "Ready":"False"
	I1001 18:55:43.525731   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:43.526353   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:43.769000   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:43.805564   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:44.026267   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:44.027291   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:44.268850   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:44.305822   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:44.527864   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:44.529119   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:44.768724   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:44.805493   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:45.026135   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:45.027064   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:45.269201   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:45.306677   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:45.526185   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:45.527696   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:45.768603   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:45.808120   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:45.921864   19130 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-brmgb" in "kube-system" namespace has status "Ready":"False"
	I1001 18:55:46.026901   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:46.028431   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:46.268786   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:46.305022   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:46.526101   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:46.527828   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:46.767884   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:46.805178   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:47.195355   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:47.196523   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:47.268920   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:47.305320   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:47.525987   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:47.526412   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:47.768105   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:47.805635   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:47.921962   19130 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-brmgb" in "kube-system" namespace has status "Ready":"False"
	I1001 18:55:48.025605   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:48.026681   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:48.267626   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:48.304946   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:48.527443   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:48.528136   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:48.768209   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:48.805759   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:49.027381   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:49.027882   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:49.269578   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:49.304657   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:49.526090   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:49.526787   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:49.768182   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:49.805094   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:49.921491   19130 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-brmgb" in "kube-system" namespace has status "Ready":"True"
	I1001 18:55:49.921516   19130 pod_ready.go:82] duration metric: took 11.006302036s for pod "nvidia-device-plugin-daemonset-brmgb" in "kube-system" namespace to be "Ready" ...
	I1001 18:55:49.921526   19130 pod_ready.go:39] duration metric: took 15.404576906s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 18:55:49.921545   19130 api_server.go:52] waiting for apiserver process to appear ...
	I1001 18:55:49.921607   19130 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:55:49.939781   19130 api_server.go:72] duration metric: took 18.272755689s to wait for apiserver process to appear ...
	I1001 18:55:49.939808   19130 api_server.go:88] waiting for apiserver healthz status ...
	I1001 18:55:49.939834   19130 api_server.go:253] Checking apiserver healthz at https://192.168.39.56:8443/healthz ...
	I1001 18:55:49.944768   19130 api_server.go:279] https://192.168.39.56:8443/healthz returned 200:
	ok
	I1001 18:55:49.945803   19130 api_server.go:141] control plane version: v1.31.1
	I1001 18:55:49.945823   19130 api_server.go:131] duration metric: took 6.00747ms to wait for apiserver health ...
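The two checks logged just above (GET /healthz returning "ok", then reading the control-plane version) can be reproduced with client-go's discovery client. This is a hedged sketch that reuses a clientset built as in the earlier example (place it alongside that file in the same package); it is not the api_server.go implementation.

    package main

    import (
    	"context"
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    )

    // checkAPIServer performs the same two probes the log records: a /healthz GET
    // and a version read. A healthy apiserver answers "ok" on /healthz.
    func checkAPIServer(ctx context.Context, cs kubernetes.Interface) error {
    	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
    	if err != nil {
    		return fmt.Errorf("healthz check failed: %w", err)
    	}
    	if string(body) != "ok" {
    		return fmt.Errorf("healthz returned %q", string(body))
    	}
    	// /version reports the control-plane version (v1.31.1 in this run).
    	v, err := cs.Discovery().ServerVersion()
    	if err != nil {
    		return err
    	}
    	fmt.Println("control plane version:", v.GitVersion)
    	return nil
    }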
	I1001 18:55:49.945832   19130 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 18:55:49.954117   19130 system_pods.go:59] 17 kube-system pods found
	I1001 18:55:49.954160   19130 system_pods.go:61] "coredns-7c65d6cfc9-h656l" [1cf425bf-e9a1-4f2b-98e3-38dc3f94625d] Running
	I1001 18:55:49.954169   19130 system_pods.go:61] "csi-hostpath-attacher-0" [7a3746e4-0f9e-4707-8c0f-a2102389ae24] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1001 18:55:49.954175   19130 system_pods.go:61] "csi-hostpath-resizer-0" [56f788c0-c09f-459b-8f37-4bc5cbc483ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1001 18:55:49.954183   19130 system_pods.go:61] "csi-hostpathplugin-jc2wz" [22221d1d-2188-4e3c-a522-e2b0dd98aa60] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1001 18:55:49.954188   19130 system_pods.go:61] "etcd-addons-800266" [1f78a1eb-6c5c-4021-9dc7-d952fce79496] Running
	I1001 18:55:49.954192   19130 system_pods.go:61] "kube-apiserver-addons-800266" [a8e4d043-4ab5-4596-9103-98f447af4070] Running
	I1001 18:55:49.954196   19130 system_pods.go:61] "kube-controller-manager-addons-800266" [344b2879-14c9-4e92-a4f9-394055ad3082] Running
	I1001 18:55:49.954201   19130 system_pods.go:61] "kube-ingress-dns-minikube" [c841f466-ff18-4ddc-8a0c-d01d392f05e4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1001 18:55:49.954207   19130 system_pods.go:61] "kube-proxy-x9xtt" [f0d9fe6d-e5fb-43c9-b8d3-8075cec0186a] Running
	I1001 18:55:49.954211   19130 system_pods.go:61] "kube-scheduler-addons-800266" [47ca10e7-9913-404a-b5a0-cef41f056ead] Running
	I1001 18:55:49.954219   19130 system_pods.go:61] "metrics-server-84c5f94fbc-7mp6j" [f319c15f-c9b0-400d-89b5-d388e9a49218] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 18:55:49.954223   19130 system_pods.go:61] "nvidia-device-plugin-daemonset-brmgb" [8958de05-2c3e-499b-9290-48c68cef124f] Running
	I1001 18:55:49.954228   19130 system_pods.go:61] "registry-66c9cd494c-s7g57" [973537c4-844f-4bcc-addb-882999c8dbbe] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1001 18:55:49.954233   19130 system_pods.go:61] "registry-proxy-tpcpz" [41439ce9-e054-4a4f-ab24-294daf5ce65a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1001 18:55:49.954241   19130 system_pods.go:61] "snapshot-controller-56fcc65765-6kh72" [4448db04-0896-4ccc-a4ea-eeaa1f1670a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 18:55:49.954249   19130 system_pods.go:61] "snapshot-controller-56fcc65765-d7cj7" [78339872-e21b-4348-9374-e13f9b6d4884] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 18:55:49.954252   19130 system_pods.go:61] "storage-provisioner" [03188f24-2d63-42be-9351-a533a36261f1] Running
	I1001 18:55:49.954258   19130 system_pods.go:74] duration metric: took 8.420329ms to wait for pod list to return data ...
	I1001 18:55:49.954265   19130 default_sa.go:34] waiting for default service account to be created ...
	I1001 18:55:49.956804   19130 default_sa.go:45] found service account: "default"
	I1001 18:55:49.956826   19130 default_sa.go:55] duration metric: took 2.554185ms for default service account to be created ...
	I1001 18:55:49.956835   19130 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 18:55:49.963541   19130 system_pods.go:86] 17 kube-system pods found
	I1001 18:55:49.963568   19130 system_pods.go:89] "coredns-7c65d6cfc9-h656l" [1cf425bf-e9a1-4f2b-98e3-38dc3f94625d] Running
	I1001 18:55:49.963575   19130 system_pods.go:89] "csi-hostpath-attacher-0" [7a3746e4-0f9e-4707-8c0f-a2102389ae24] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1001 18:55:49.963582   19130 system_pods.go:89] "csi-hostpath-resizer-0" [56f788c0-c09f-459b-8f37-4bc5cbc483ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1001 18:55:49.963589   19130 system_pods.go:89] "csi-hostpathplugin-jc2wz" [22221d1d-2188-4e3c-a522-e2b0dd98aa60] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1001 18:55:49.963594   19130 system_pods.go:89] "etcd-addons-800266" [1f78a1eb-6c5c-4021-9dc7-d952fce79496] Running
	I1001 18:55:49.963599   19130 system_pods.go:89] "kube-apiserver-addons-800266" [a8e4d043-4ab5-4596-9103-98f447af4070] Running
	I1001 18:55:49.963602   19130 system_pods.go:89] "kube-controller-manager-addons-800266" [344b2879-14c9-4e92-a4f9-394055ad3082] Running
	I1001 18:55:49.963608   19130 system_pods.go:89] "kube-ingress-dns-minikube" [c841f466-ff18-4ddc-8a0c-d01d392f05e4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1001 18:55:49.963611   19130 system_pods.go:89] "kube-proxy-x9xtt" [f0d9fe6d-e5fb-43c9-b8d3-8075cec0186a] Running
	I1001 18:55:49.963614   19130 system_pods.go:89] "kube-scheduler-addons-800266" [47ca10e7-9913-404a-b5a0-cef41f056ead] Running
	I1001 18:55:49.963630   19130 system_pods.go:89] "metrics-server-84c5f94fbc-7mp6j" [f319c15f-c9b0-400d-89b5-d388e9a49218] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 18:55:49.963636   19130 system_pods.go:89] "nvidia-device-plugin-daemonset-brmgb" [8958de05-2c3e-499b-9290-48c68cef124f] Running
	I1001 18:55:49.963642   19130 system_pods.go:89] "registry-66c9cd494c-s7g57" [973537c4-844f-4bcc-addb-882999c8dbbe] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1001 18:55:49.963650   19130 system_pods.go:89] "registry-proxy-tpcpz" [41439ce9-e054-4a4f-ab24-294daf5ce65a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1001 18:55:49.963655   19130 system_pods.go:89] "snapshot-controller-56fcc65765-6kh72" [4448db04-0896-4ccc-a4ea-eeaa1f1670a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 18:55:49.963661   19130 system_pods.go:89] "snapshot-controller-56fcc65765-d7cj7" [78339872-e21b-4348-9374-e13f9b6d4884] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 18:55:49.963665   19130 system_pods.go:89] "storage-provisioner" [03188f24-2d63-42be-9351-a533a36261f1] Running
	I1001 18:55:49.963672   19130 system_pods.go:126] duration metric: took 6.831591ms to wait for k8s-apps to be running ...
	I1001 18:55:49.963680   19130 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 18:55:49.963721   19130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 18:55:49.977922   19130 system_svc.go:56] duration metric: took 14.233798ms WaitForService to wait for kubelet
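The kubelet check above runs sudo systemctl is-active --quiet service kubelet on the node through minikube's ssh_runner. A minimal local approximation (a hypothetical helper that executes the command directly rather than over SSH) looks like this:

    package main

    import "os/exec"

    // kubeletActive is a simplified stand-in for the logged check: systemctl
    // is-active exits 0 only when the unit is active, so a nil error means "running".
    func kubeletActive() bool {
    	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }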
	I1001 18:55:49.977958   19130 kubeadm.go:582] duration metric: took 18.3109378s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 18:55:49.977977   19130 node_conditions.go:102] verifying NodePressure condition ...
	I1001 18:55:49.980894   19130 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 18:55:49.980926   19130 node_conditions.go:123] node cpu capacity is 2
	I1001 18:55:49.980946   19130 node_conditions.go:105] duration metric: took 2.963511ms to run NodePressure ...
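The NodePressure step reads each node's ephemeral-storage and CPU capacity and verifies no pressure conditions are set. Below is a hedged client-go sketch of that kind of check, again assuming the clientset from the first sketch (same package); it is not the node_conditions.go code itself.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // verifyNodePressure lists nodes, prints their storage/CPU capacity, and fails
    // if any node reports memory or disk pressure.
    func verifyNodePressure(ctx context.Context, cs kubernetes.Interface) error {
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, eph.String(), cpu.String())
    		for _, c := range n.Status.Conditions {
    			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) && c.Status == corev1.ConditionTrue {
    				return fmt.Errorf("node %s reports %s", n.Name, c.Type)
    			}
    		}
    	}
    	return nil
    }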
	I1001 18:55:49.980961   19130 start.go:241] waiting for startup goroutines ...
	I1001 18:55:50.025756   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:50.026668   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:50.267669   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:50.304468   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:50.526075   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:50.526326   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:50.768807   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:50.805323   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:51.025572   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:51.026351   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:51.268074   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:51.305768   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:51.526059   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:51.526376   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:51.768501   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:51.805013   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:52.025541   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:52.025820   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:52.268174   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:52.305310   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:52.525865   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:52.526118   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:52.767743   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:52.804987   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:53.026311   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:53.026725   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:53.269220   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:53.305447   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:53.528776   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:53.529549   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:53.768687   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:53.805127   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:54.027297   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:54.027524   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:54.268151   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:54.305282   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:54.526062   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:54.526337   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:54.767500   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:54.804748   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:55.025675   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:55.026133   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:55.268404   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:55.304648   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:55.526329   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:55.527336   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:55.778761   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:55.874688   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:56.025330   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:56.026334   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:56.269082   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:56.305856   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:56.526451   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:56.528154   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:56.768201   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:56.805826   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:57.027121   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:57.027258   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:57.269172   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:57.304977   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:57.526351   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:57.526590   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:57.768978   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:57.805536   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:58.025659   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:58.026501   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:58.269001   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:58.305286   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:58.526084   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:58.526665   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:58.768429   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:58.804806   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:59.026094   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:59.026304   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:59.268418   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:59.304817   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:55:59.526515   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:55:59.526597   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:55:59.767811   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:55:59.805239   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:00.040806   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:00.041206   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:00.267008   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:00.306299   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:00.528191   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:00.528624   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:00.767791   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:00.805230   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:01.026832   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:01.026936   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:01.268009   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:01.305171   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:01.526717   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:01.526953   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:01.767418   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:01.805266   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:02.026936   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:02.027047   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:02.267842   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:02.305105   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:02.526845   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:02.526851   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:02.772693   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:02.807597   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:03.025499   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:03.026255   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:03.268684   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:03.304749   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:03.641187   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:03.641280   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:03.775598   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:03.804940   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:04.026931   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:04.027062   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:04.267296   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:04.305269   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:04.526274   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:04.526294   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:04.768554   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:04.805025   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:05.026457   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:05.027195   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:05.267602   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:05.304856   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:05.526109   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:05.526251   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:05.769032   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:05.804837   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:06.025451   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:06.026310   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:06.268089   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:06.305672   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:06.525242   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:06.526963   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:06.768305   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:06.805363   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:07.026589   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:07.026966   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:07.268173   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:07.304970   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:07.525623   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:07.525784   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:07.767875   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:07.804645   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:08.027996   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:08.029425   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:08.268902   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:08.304990   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:08.526506   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:08.527033   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:08.767956   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:08.805408   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:09.026790   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:09.026978   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:09.268334   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:09.304434   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:09.526629   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:09.526792   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:09.767687   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:09.804823   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:10.026143   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:10.026440   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:10.268627   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:10.305235   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:10.525163   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:10.526098   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:10.767674   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:10.805154   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:11.030249   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:11.030313   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:11.267963   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:11.305412   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:11.526600   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:11.526764   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:11.768337   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:11.805284   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:12.026818   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:12.027684   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:12.268085   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:12.304167   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:12.526893   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:12.527141   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:12.767499   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:12.805096   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:13.026871   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:13.027052   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:13.267903   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:13.304506   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:13.525481   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:13.525930   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:13.768076   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:13.805244   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:14.026136   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:14.026299   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:14.267893   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:14.305575   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:14.525873   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:14.526447   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:14.768188   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:14.805374   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:15.026766   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:15.027178   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:15.268704   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:15.305018   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:15.525416   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:15.526553   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:15.769012   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:15.804814   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:16.026829   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:16.027085   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:16.269425   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:16.305420   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:16.525190   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:16.526103   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:16.768230   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:16.804956   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:17.026689   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:17.027097   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:17.270837   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:17.305485   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:17.527106   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:17.527585   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:18.022760   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:18.023643   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:18.026758   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:18.028247   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:18.268167   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:18.305673   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:18.525578   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:18.526126   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:18.770456   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:18.804702   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:19.025774   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:19.026670   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:19.267283   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:19.304967   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:19.527181   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:19.527743   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:19.768368   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:19.804113   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:20.025739   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:20.026338   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:20.268440   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:20.304515   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:20.526543   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:20.526761   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:20.767899   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:20.805675   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:21.026967   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:21.028151   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:21.267897   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:21.304590   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:21.525719   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:21.527000   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:21.769464   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:21.868969   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:22.025930   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:22.026812   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:22.272213   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:22.305545   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:22.526241   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:22.526287   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:22.768226   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:22.804532   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:23.025816   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:23.026215   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:23.268463   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:23.304826   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:23.525776   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:23.526678   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:23.767547   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:23.805480   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:24.026894   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:24.027382   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:24.269916   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:24.305459   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:24.525644   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:24.527847   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:24.769044   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:24.805086   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:25.027057   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:25.027395   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:25.269294   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:25.304979   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:25.526357   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:25.527753   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:25.926826   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:25.927171   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:26.025449   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:26.026644   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:26.268268   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:26.306368   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:26.526496   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:26.526542   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:26.768258   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:26.804830   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:27.026617   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:56:27.027189   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:27.269102   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:27.310482   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:27.527333   19130 kapi.go:107] duration metric: took 48.005165013s to wait for kubernetes.io/minikube-addons=registry ...
	I1001 18:56:27.527541   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:27.768019   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:27.806508   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:28.028855   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:28.271037   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:28.314347   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:28.527469   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:28.769253   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:28.804846   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:29.026066   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:29.267391   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:29.304223   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:29.525839   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:29.770180   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:29.808249   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:30.028910   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:30.268603   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:30.312856   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:30.528793   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:30.769012   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:30.805914   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:31.025993   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:31.269924   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:31.304937   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:31.824225   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:31.824538   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:31.830941   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:32.025381   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:32.268065   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:32.305572   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:32.526896   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:32.768054   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:32.805263   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:33.030654   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:33.268552   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:33.304739   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:33.526564   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:33.768756   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:33.869132   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:34.025670   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:34.268822   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:34.305162   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:34.525913   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:34.767654   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:34.805279   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:35.025946   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:35.272489   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:35.304596   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:35.528585   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:35.768740   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:35.805002   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:36.027335   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:36.268329   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:36.304598   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:36.526023   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:36.770703   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:36.804740   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:37.026529   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:37.271591   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:37.371300   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:37.526128   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:37.767646   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:37.804783   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:38.025956   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:38.267638   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:38.304989   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:38.527026   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:38.767635   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:38.805865   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:39.025525   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:39.275749   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:39.309611   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:39.525778   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:39.768286   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:39.808791   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:40.034899   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:40.270158   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:40.306807   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:40.526250   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:40.768782   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:40.804708   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:41.025318   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:41.268327   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:41.305404   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:41.525848   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:41.767472   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:41.804589   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:42.025320   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:42.268125   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:42.305670   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:42.527778   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:42.767606   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:42.805074   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:43.026042   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:43.269285   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:43.304533   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:43.526139   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:43.767981   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:43.805325   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:44.029869   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:44.268527   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:56:44.305512   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:44.526188   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:44.770555   19130 kapi.go:107] duration metric: took 1m4.507269266s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1001 18:56:44.870080   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:45.027086   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:45.305406   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:45.525742   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:45.806902   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:46.026078   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:46.306409   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:46.526624   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:46.805889   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:47.027251   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:47.305110   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:47.526029   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:47.804997   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:48.025758   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:48.306321   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:48.526908   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:48.804640   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:49.025097   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:49.304456   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:49.525324   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:50.163855   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:50.164295   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:50.305538   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:50.525692   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:50.804560   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:51.025970   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:51.304754   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:51.527386   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:51.805556   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:52.025720   19130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:56:52.305190   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:52.526463   19130 kapi.go:107] duration metric: took 1m13.005117219s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1001 18:56:52.805311   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:53.369602   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:53.805211   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:54.306067   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:54.805885   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:55.306994   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:55.805664   19130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:56:56.312311   19130 kapi.go:107] duration metric: took 1m14.511016705s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1001 18:56:56.314047   19130 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-800266 cluster.
	I1001 18:56:56.315213   19130 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1001 18:56:56.316366   19130 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1001 18:56:56.317719   19130 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, nvidia-device-plugin, cloud-spanner, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1001 18:56:56.318879   19130 addons.go:510] duration metric: took 1m24.651803136s for enable addons: enabled=[ingress-dns storage-provisioner nvidia-device-plugin cloud-spanner storage-provisioner-rancher inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1001 18:56:56.318921   19130 start.go:246] waiting for cluster config update ...
	I1001 18:56:56.318939   19130 start.go:255] writing updated cluster config ...
	I1001 18:56:56.319187   19130 ssh_runner.go:195] Run: rm -f paused
	I1001 18:56:56.372853   19130 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 18:56:56.374326   19130 out.go:177] * Done! kubectl is now configured to use "addons-800266" cluster and "default" namespace by default
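The gcp-auth messages above describe two options: keep credentials out of a specific pod by labeling it with the `gcp-auth-skip-secret` key, or rerun addons enable with --refresh so existing pods pick up the mounted credentials. As a minimal, hypothetical sketch of the first option (the pod name, the image tag, and the "true" label value are assumptions on my part; the minikube output above only names the label key), a pod that opts out of the credential mount could look like:

# hypothetical manifest; the gcp-auth-skip-secret label is the only point of interest
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds                   # placeholder name
  labels:
    gcp-auth-skip-secret: "true"       # assumed value; the key is what the output above says to add
spec:
  containers:
  - name: app
    image: gcr.io/k8s-minikube/busybox # placeholder; this repo already appears elsewhere in the report
    command: ["sleep", "3600"]

For pods created before the addon finished, rerunning the enable step with the --refresh flag, as the output above suggests, remounts the credentials without recreating each pod by hand.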
	
	
	==> CRI-O <==
	Oct 01 19:10:23 addons-800266 crio[664]: time="2024-10-01 19:10:23.483410167Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727809823483381793,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8095187a-777a-4bd0-a25a-cf4217e5f973 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:10:23 addons-800266 crio[664]: time="2024-10-01 19:10:23.484042903Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=47614f51-9c3c-426d-97ed-8e497029dddf name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:10:23 addons-800266 crio[664]: time="2024-10-01 19:10:23.484109096Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=47614f51-9c3c-426d-97ed-8e497029dddf name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:10:23 addons-800266 crio[664]: time="2024-10-01 19:10:23.484557668Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88bf117e429d7ea4e9df0ce2b2849d84898c9bbc45328a89d02e9dcce9e2d110,PodSandboxId:cb3c6e9877a1118a62a4aebb1426b03b4aa55192480c2aea3e4bdf52162a43bc,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727809692937304742,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-46nkk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4346a434-1efe-4fcc-aadf-751f61d32b31,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d39ddcb1b5f9aec812d43a7a677a4bdb517e00173cdd5c8a4e9b3e38f24efb67,PodSandboxId:2ab8e5f9df124108b664b6448e5fdb88387e2e454c9759c1dbdca7adce4481ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727809680350390423,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 78001029-2e99-4c25-bac6-3c4d1c7efca3,},Annotations:map[string]string{i
o.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c060f2deb15ed4167efb8db5c219671004ab8d53470f30e0c3d7d653951f0a,PodSandboxId:10fe9643818d6d1f3a7a277d92e6efc4fbc30e5dd21871399dc5e79554e961e3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727809550424447318,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43c83fb0-f623-43ea-bc3c-91da7206fa2c,},Annotations:map[string]string{io.kubernetes.container.hash
: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850bea5323c259645b8f0e337f2d7756596d06da4dfe13ba3f7972eaca837ff0,PodSandboxId:9e6e8e9034d4e5aaa218d7d1d9c3bc0dbc125129322f313ae43c82560fd4203b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727808968959259879,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-7mp6j,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f319c15f-c9b0-400d-89b5-d388e9a49218,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26504377c61b094c87f9c57dc6547187209d3397c940e127450701ed086d4170,PodSandboxId:e0fe8e6e2e03c67898468faddcb544c439feea381e7e5c4b053c35f24a62ba1d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727808937782393443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner
,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03188f24-2d63-42be-9351-a533a36261f1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588e2b860d1061d153b3e800d62e0681a5e7a74baada9e285edd8def6802801a,PodSandboxId:c94c530e7579b7788bbfa881f4333ef9b1b4e7a763807af4db7e658277e898f0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727808932951927591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c6
5d6cfc9-h656l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf425bf-e9a1-4f2b-98e3-38dc3f94625d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e9081f37c3fb61a5375d14bdb14a2c60017c1e8a63c43d60390321737cd070b,PodSandboxId:3383cc1410018df23bcb5aae6c0d4f0e26f5fb5ad129a65f48d09f587b7824d2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d13
1805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727808932050539867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x9xtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d9fe6d-e5fb-43c9-b8d3-8075cec0186a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e78592209ec593f29721f4652b4edbdb5574f343b6c6d59e5bc1b4ec8ddb5e,PodSandboxId:affc68d28d7cc11dc7b2fdd3f98016b29c1b381ed0b0e67c0baf603398373f07,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8
d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727808921169082687,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e31f2ac286141c3c6cb5bc1d1fd9d8d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f69e4bbfce257ab72b58b6725f7ad1549dbfb02a122f66601536180d27ad34a,PodSandboxId:41c2b5c7ce0b8532e9454993a076bc07bb256a0e21c900aea5d34c63ab149409,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,Create
dAt:1727808921152193749,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88c118d760841c2b05582d2c66532469,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f42a255e28d30da9b9b571370bb7f475734b4e820e95dcfc7e08e3366164272b,PodSandboxId:c3f4e55c0d2f6b266914b9bde04ea61c23e795b7087c92230f699f4f7dd675c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727
808921090475270,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51328b0912537964eeb48bb5e91ec731,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868f38fe5a25463a1bdbe6eaafc9fdd61fcd07ad2bcc794f9562d0b8dd1b2c67,PodSandboxId:5cd1a5dc883d0f27ba6f9dcdbe48ab65759faa0e4b09187ecc0d83cd2064c461,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172
7808921071073904,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0368aed84826471dbccaebb4039370c,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=47614f51-9c3c-426d-97ed-8e497029dddf name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:10:23 addons-800266 crio[664]: time="2024-10-01 19:10:23.542872480Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=36a1aa27-a3ab-401d-9432-28ac262e9d65 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:10:23 addons-800266 crio[664]: time="2024-10-01 19:10:23.542975202Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=36a1aa27-a3ab-401d-9432-28ac262e9d65 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:10:23 addons-800266 crio[664]: time="2024-10-01 19:10:23.546607027Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5fd01a9a-c189-4350-be4d-f050d9995b3b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:10:23 addons-800266 crio[664]: time="2024-10-01 19:10:23.547855490Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727809823547822362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5fd01a9a-c189-4350-be4d-f050d9995b3b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:10:23 addons-800266 crio[664]: time="2024-10-01 19:10:23.548604035Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5527f8e8-230f-4a1e-b7d1-4f7539f5197b name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:10:23 addons-800266 crio[664]: time="2024-10-01 19:10:23.548682697Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5527f8e8-230f-4a1e-b7d1-4f7539f5197b name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:10:23 addons-800266 crio[664]: time="2024-10-01 19:10:23.549040783Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88bf117e429d7ea4e9df0ce2b2849d84898c9bbc45328a89d02e9dcce9e2d110,PodSandboxId:cb3c6e9877a1118a62a4aebb1426b03b4aa55192480c2aea3e4bdf52162a43bc,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727809692937304742,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-46nkk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4346a434-1efe-4fcc-aadf-751f61d32b31,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d39ddcb1b5f9aec812d43a7a677a4bdb517e00173cdd5c8a4e9b3e38f24efb67,PodSandboxId:2ab8e5f9df124108b664b6448e5fdb88387e2e454c9759c1dbdca7adce4481ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727809680350390423,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 78001029-2e99-4c25-bac6-3c4d1c7efca3,},Annotations:map[string]string{i
o.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c060f2deb15ed4167efb8db5c219671004ab8d53470f30e0c3d7d653951f0a,PodSandboxId:10fe9643818d6d1f3a7a277d92e6efc4fbc30e5dd21871399dc5e79554e961e3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727809550424447318,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43c83fb0-f623-43ea-bc3c-91da7206fa2c,},Annotations:map[string]string{io.kubernetes.container.hash
: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850bea5323c259645b8f0e337f2d7756596d06da4dfe13ba3f7972eaca837ff0,PodSandboxId:9e6e8e9034d4e5aaa218d7d1d9c3bc0dbc125129322f313ae43c82560fd4203b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727808968959259879,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-7mp6j,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f319c15f-c9b0-400d-89b5-d388e9a49218,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26504377c61b094c87f9c57dc6547187209d3397c940e127450701ed086d4170,PodSandboxId:e0fe8e6e2e03c67898468faddcb544c439feea381e7e5c4b053c35f24a62ba1d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727808937782393443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner
,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03188f24-2d63-42be-9351-a533a36261f1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588e2b860d1061d153b3e800d62e0681a5e7a74baada9e285edd8def6802801a,PodSandboxId:c94c530e7579b7788bbfa881f4333ef9b1b4e7a763807af4db7e658277e898f0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727808932951927591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c6
5d6cfc9-h656l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf425bf-e9a1-4f2b-98e3-38dc3f94625d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e9081f37c3fb61a5375d14bdb14a2c60017c1e8a63c43d60390321737cd070b,PodSandboxId:3383cc1410018df23bcb5aae6c0d4f0e26f5fb5ad129a65f48d09f587b7824d2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d13
1805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727808932050539867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x9xtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d9fe6d-e5fb-43c9-b8d3-8075cec0186a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e78592209ec593f29721f4652b4edbdb5574f343b6c6d59e5bc1b4ec8ddb5e,PodSandboxId:affc68d28d7cc11dc7b2fdd3f98016b29c1b381ed0b0e67c0baf603398373f07,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8
d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727808921169082687,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e31f2ac286141c3c6cb5bc1d1fd9d8d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f69e4bbfce257ab72b58b6725f7ad1549dbfb02a122f66601536180d27ad34a,PodSandboxId:41c2b5c7ce0b8532e9454993a076bc07bb256a0e21c900aea5d34c63ab149409,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,Create
dAt:1727808921152193749,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88c118d760841c2b05582d2c66532469,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f42a255e28d30da9b9b571370bb7f475734b4e820e95dcfc7e08e3366164272b,PodSandboxId:c3f4e55c0d2f6b266914b9bde04ea61c23e795b7087c92230f699f4f7dd675c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727
808921090475270,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51328b0912537964eeb48bb5e91ec731,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868f38fe5a25463a1bdbe6eaafc9fdd61fcd07ad2bcc794f9562d0b8dd1b2c67,PodSandboxId:5cd1a5dc883d0f27ba6f9dcdbe48ab65759faa0e4b09187ecc0d83cd2064c461,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172
7808921071073904,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0368aed84826471dbccaebb4039370c,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5527f8e8-230f-4a1e-b7d1-4f7539f5197b name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:10:23 addons-800266 crio[664]: time="2024-10-01 19:10:23.602507278Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3e41d149-0b28-4a97-9f61-fbc990b375a2 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:10:23 addons-800266 crio[664]: time="2024-10-01 19:10:23.602591800Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e41d149-0b28-4a97-9f61-fbc990b375a2 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:10:23 addons-800266 crio[664]: time="2024-10-01 19:10:23.603961379Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f13504e1-e0ea-4e48-8aa4-864ef3644a5e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:10:23 addons-800266 crio[664]: time="2024-10-01 19:10:23.605373432Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727809823605343025,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f13504e1-e0ea-4e48-8aa4-864ef3644a5e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:10:23 addons-800266 crio[664]: time="2024-10-01 19:10:23.606133906Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2325d25d-e0a6-40a3-8710-ec29da2c8db0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:10:23 addons-800266 crio[664]: time="2024-10-01 19:10:23.606199064Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2325d25d-e0a6-40a3-8710-ec29da2c8db0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:10:23 addons-800266 crio[664]: time="2024-10-01 19:10:23.606455530Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88bf117e429d7ea4e9df0ce2b2849d84898c9bbc45328a89d02e9dcce9e2d110,PodSandboxId:cb3c6e9877a1118a62a4aebb1426b03b4aa55192480c2aea3e4bdf52162a43bc,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727809692937304742,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-46nkk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4346a434-1efe-4fcc-aadf-751f61d32b31,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d39ddcb1b5f9aec812d43a7a677a4bdb517e00173cdd5c8a4e9b3e38f24efb67,PodSandboxId:2ab8e5f9df124108b664b6448e5fdb88387e2e454c9759c1dbdca7adce4481ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727809680350390423,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 78001029-2e99-4c25-bac6-3c4d1c7efca3,},Annotations:map[string]string{i
o.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c060f2deb15ed4167efb8db5c219671004ab8d53470f30e0c3d7d653951f0a,PodSandboxId:10fe9643818d6d1f3a7a277d92e6efc4fbc30e5dd21871399dc5e79554e961e3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727809550424447318,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43c83fb0-f623-43ea-bc3c-91da7206fa2c,},Annotations:map[string]string{io.kubernetes.container.hash
: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850bea5323c259645b8f0e337f2d7756596d06da4dfe13ba3f7972eaca837ff0,PodSandboxId:9e6e8e9034d4e5aaa218d7d1d9c3bc0dbc125129322f313ae43c82560fd4203b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727808968959259879,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-7mp6j,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f319c15f-c9b0-400d-89b5-d388e9a49218,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26504377c61b094c87f9c57dc6547187209d3397c940e127450701ed086d4170,PodSandboxId:e0fe8e6e2e03c67898468faddcb544c439feea381e7e5c4b053c35f24a62ba1d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727808937782393443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner
,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03188f24-2d63-42be-9351-a533a36261f1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588e2b860d1061d153b3e800d62e0681a5e7a74baada9e285edd8def6802801a,PodSandboxId:c94c530e7579b7788bbfa881f4333ef9b1b4e7a763807af4db7e658277e898f0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727808932951927591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c6
5d6cfc9-h656l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf425bf-e9a1-4f2b-98e3-38dc3f94625d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e9081f37c3fb61a5375d14bdb14a2c60017c1e8a63c43d60390321737cd070b,PodSandboxId:3383cc1410018df23bcb5aae6c0d4f0e26f5fb5ad129a65f48d09f587b7824d2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d13
1805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727808932050539867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x9xtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d9fe6d-e5fb-43c9-b8d3-8075cec0186a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e78592209ec593f29721f4652b4edbdb5574f343b6c6d59e5bc1b4ec8ddb5e,PodSandboxId:affc68d28d7cc11dc7b2fdd3f98016b29c1b381ed0b0e67c0baf603398373f07,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8
d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727808921169082687,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e31f2ac286141c3c6cb5bc1d1fd9d8d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f69e4bbfce257ab72b58b6725f7ad1549dbfb02a122f66601536180d27ad34a,PodSandboxId:41c2b5c7ce0b8532e9454993a076bc07bb256a0e21c900aea5d34c63ab149409,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,Create
dAt:1727808921152193749,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88c118d760841c2b05582d2c66532469,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f42a255e28d30da9b9b571370bb7f475734b4e820e95dcfc7e08e3366164272b,PodSandboxId:c3f4e55c0d2f6b266914b9bde04ea61c23e795b7087c92230f699f4f7dd675c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727
808921090475270,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51328b0912537964eeb48bb5e91ec731,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868f38fe5a25463a1bdbe6eaafc9fdd61fcd07ad2bcc794f9562d0b8dd1b2c67,PodSandboxId:5cd1a5dc883d0f27ba6f9dcdbe48ab65759faa0e4b09187ecc0d83cd2064c461,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172
7808921071073904,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0368aed84826471dbccaebb4039370c,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2325d25d-e0a6-40a3-8710-ec29da2c8db0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:10:23 addons-800266 crio[664]: time="2024-10-01 19:10:23.649388939Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e349a9e6-f99d-46e5-8cc6-f8953590b554 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:10:23 addons-800266 crio[664]: time="2024-10-01 19:10:23.649465453Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e349a9e6-f99d-46e5-8cc6-f8953590b554 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:10:23 addons-800266 crio[664]: time="2024-10-01 19:10:23.650771402Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2078406f-db0b-42aa-a18a-4eaa22cda81e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:10:23 addons-800266 crio[664]: time="2024-10-01 19:10:23.652132963Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727809823652106862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2078406f-db0b-42aa-a18a-4eaa22cda81e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:10:23 addons-800266 crio[664]: time="2024-10-01 19:10:23.653072325Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=919435d7-3529-4f91-b48a-8cf4c56eed32 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:10:23 addons-800266 crio[664]: time="2024-10-01 19:10:23.653139948Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=919435d7-3529-4f91-b48a-8cf4c56eed32 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:10:23 addons-800266 crio[664]: time="2024-10-01 19:10:23.653395269Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88bf117e429d7ea4e9df0ce2b2849d84898c9bbc45328a89d02e9dcce9e2d110,PodSandboxId:cb3c6e9877a1118a62a4aebb1426b03b4aa55192480c2aea3e4bdf52162a43bc,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727809692937304742,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-46nkk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4346a434-1efe-4fcc-aadf-751f61d32b31,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d39ddcb1b5f9aec812d43a7a677a4bdb517e00173cdd5c8a4e9b3e38f24efb67,PodSandboxId:2ab8e5f9df124108b664b6448e5fdb88387e2e454c9759c1dbdca7adce4481ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727809680350390423,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 78001029-2e99-4c25-bac6-3c4d1c7efca3,},Annotations:map[string]string{i
o.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c060f2deb15ed4167efb8db5c219671004ab8d53470f30e0c3d7d653951f0a,PodSandboxId:10fe9643818d6d1f3a7a277d92e6efc4fbc30e5dd21871399dc5e79554e961e3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727809550424447318,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43c83fb0-f623-43ea-bc3c-91da7206fa2c,},Annotations:map[string]string{io.kubernetes.container.hash
: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850bea5323c259645b8f0e337f2d7756596d06da4dfe13ba3f7972eaca837ff0,PodSandboxId:9e6e8e9034d4e5aaa218d7d1d9c3bc0dbc125129322f313ae43c82560fd4203b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727808968959259879,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-7mp6j,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f319c15f-c9b0-400d-89b5-d388e9a49218,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26504377c61b094c87f9c57dc6547187209d3397c940e127450701ed086d4170,PodSandboxId:e0fe8e6e2e03c67898468faddcb544c439feea381e7e5c4b053c35f24a62ba1d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727808937782393443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner
,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03188f24-2d63-42be-9351-a533a36261f1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588e2b860d1061d153b3e800d62e0681a5e7a74baada9e285edd8def6802801a,PodSandboxId:c94c530e7579b7788bbfa881f4333ef9b1b4e7a763807af4db7e658277e898f0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727808932951927591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c6
5d6cfc9-h656l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf425bf-e9a1-4f2b-98e3-38dc3f94625d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e9081f37c3fb61a5375d14bdb14a2c60017c1e8a63c43d60390321737cd070b,PodSandboxId:3383cc1410018df23bcb5aae6c0d4f0e26f5fb5ad129a65f48d09f587b7824d2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d13
1805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727808932050539867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x9xtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d9fe6d-e5fb-43c9-b8d3-8075cec0186a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e78592209ec593f29721f4652b4edbdb5574f343b6c6d59e5bc1b4ec8ddb5e,PodSandboxId:affc68d28d7cc11dc7b2fdd3f98016b29c1b381ed0b0e67c0baf603398373f07,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8
d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727808921169082687,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e31f2ac286141c3c6cb5bc1d1fd9d8d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f69e4bbfce257ab72b58b6725f7ad1549dbfb02a122f66601536180d27ad34a,PodSandboxId:41c2b5c7ce0b8532e9454993a076bc07bb256a0e21c900aea5d34c63ab149409,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,Create
dAt:1727808921152193749,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88c118d760841c2b05582d2c66532469,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f42a255e28d30da9b9b571370bb7f475734b4e820e95dcfc7e08e3366164272b,PodSandboxId:c3f4e55c0d2f6b266914b9bde04ea61c23e795b7087c92230f699f4f7dd675c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727
808921090475270,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51328b0912537964eeb48bb5e91ec731,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868f38fe5a25463a1bdbe6eaafc9fdd61fcd07ad2bcc794f9562d0b8dd1b2c67,PodSandboxId:5cd1a5dc883d0f27ba6f9dcdbe48ab65759faa0e4b09187ecc0d83cd2064c461,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172
7808921071073904,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-800266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0368aed84826471dbccaebb4039370c,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=919435d7-3529-4f91-b48a-8cf4c56eed32 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	88bf117e429d7       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   cb3c6e9877a11       hello-world-app-55bf9c44b4-46nkk
	d39ddcb1b5f9a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     2 minutes ago       Running             busybox                   0                   2ab8e5f9df124       busybox
	f2c060f2deb15       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                         4 minutes ago       Running             nginx                     0                   10fe9643818d6       nginx
	850bea5323c25       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   14 minutes ago      Running             metrics-server            0                   9e6e8e9034d4e       metrics-server-84c5f94fbc-7mp6j
	26504377c61b0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        14 minutes ago      Running             storage-provisioner       0                   e0fe8e6e2e03c       storage-provisioner
	588e2b860d106       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        14 minutes ago      Running             coredns                   0                   c94c530e7579b       coredns-7c65d6cfc9-h656l
	7e9081f37c3fb       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        14 minutes ago      Running             kube-proxy                0                   3383cc1410018       kube-proxy-x9xtt
	f2e78592209ec       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        15 minutes ago      Running             etcd                      0                   affc68d28d7cc       etcd-addons-800266
	1f69e4bbfce25       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        15 minutes ago      Running             kube-scheduler            0                   41c2b5c7ce0b8       kube-scheduler-addons-800266
	f42a255e28d30       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        15 minutes ago      Running             kube-controller-manager   0                   c3f4e55c0d2f6       kube-controller-manager-addons-800266
	868f38fe5a254       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        15 minutes ago      Running             kube-apiserver            0                   5cd1a5dc883d0       kube-apiserver-addons-800266
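
	The listing above is CRI-O's view of the node. Assuming the minikube profile name matches the kubectl context used throughout this run (addons-800266), roughly the same table can be pulled from inside the guest with crictl; a minimal sketch:

	    # query CRI-O over its socket from inside the minikube guest (profile name assumed)
	    minikube -p addons-800266 ssh -- sudo crictl ps -a -o table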
	
	
	==> coredns [588e2b860d1061d153b3e800d62e0681a5e7a74baada9e285edd8def6802801a] <==
	[INFO] 10.244.0.20:46843 - 61814 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000090965s
	[INFO] 10.244.0.20:44583 - 58401 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000120948s
	[INFO] 10.244.0.20:44583 - 19798 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000086333s
	[INFO] 10.244.0.20:46843 - 41687 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000136831s
	[INFO] 10.244.0.20:44583 - 2271 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000036434s
	[INFO] 10.244.0.20:46843 - 33164 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00011028s
	[INFO] 10.244.0.20:44583 - 22972 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000062154s
	[INFO] 10.244.0.20:44583 - 4795 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000040824s
	[INFO] 10.244.0.20:44583 - 10567 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037771s
	[INFO] 10.244.0.20:46843 - 62325 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000121843s
	[INFO] 10.244.0.20:44583 - 19172 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000058181s
	[INFO] 10.244.0.20:34311 - 58089 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000099816s
	[INFO] 10.244.0.20:34311 - 49475 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00008457s
	[INFO] 10.244.0.20:45463 - 44865 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000120542s
	[INFO] 10.244.0.20:45463 - 62169 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000041781s
	[INFO] 10.244.0.20:34311 - 13219 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00004567s
	[INFO] 10.244.0.20:45463 - 14847 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000230625s
	[INFO] 10.244.0.20:34311 - 24406 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000064742s
	[INFO] 10.244.0.20:45463 - 59398 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000111146s
	[INFO] 10.244.0.20:45463 - 34652 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000036335s
	[INFO] 10.244.0.20:45463 - 49152 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000028975s
	[INFO] 10.244.0.20:34311 - 26853 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000098359s
	[INFO] 10.244.0.20:45463 - 55546 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000031588s
	[INFO] 10.244.0.20:34311 - 15273 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000092423s
	[INFO] 10.244.0.20:34311 - 59912 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000061915s
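
	The NXDOMAIN answers for names like hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local, followed by NOERROR for the bare service name, are the client pod's ndots:5 search-path expansion rather than resolution failures; the first suffix tried (ingress-nginx.svc.cluster.local) suggests the querying pod at 10.244.0.20 sits in the ingress-nginx namespace. A rough way to see the same search list and final answer from a throwaway pod (the pod name dns-probe and the busybox:1.36 image are arbitrary choices here, and the suffix order will follow the probe pod's own namespace):

	    kubectl --context addons-800266 run dns-probe --rm -it --restart=Never \
	      --image=busybox:1.36 -- sh -c 'cat /etc/resolv.conf; nslookup hello-world-app.default.svc.cluster.local'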
	
	
	==> describe nodes <==
	Name:               addons-800266
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-800266
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=addons-800266
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T18_55_26_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-800266
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 18:55:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-800266
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:10:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 19:08:31 +0000   Tue, 01 Oct 2024 18:55:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 19:08:31 +0000   Tue, 01 Oct 2024 18:55:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 19:08:31 +0000   Tue, 01 Oct 2024 18:55:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 19:08:31 +0000   Tue, 01 Oct 2024 18:55:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.56
	  Hostname:    addons-800266
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 e369f6d6d9654b1f858197dec59d1591
	  System UUID:                e369f6d6-d965-4b1f-8581-97dec59d1591
	  Boot ID:                    e7e1b035-60f6-4998-aa54-57f01ff745eb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-46nkk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 coredns-7c65d6cfc9-h656l                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     14m
	  kube-system                 etcd-addons-800266                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         14m
	  kube-system                 kube-apiserver-addons-800266             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-addons-800266    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-x9xtt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-800266             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-84c5f94fbc-7mp6j          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         14m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node addons-800266 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node addons-800266 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node addons-800266 status is now: NodeHasSufficientPID
	  Normal  NodeReady                14m   kubelet          Node addons-800266 status is now: NodeReady
	  Normal  RegisteredNode           14m   node-controller  Node addons-800266 event: Registered Node addons-800266 in Controller
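
	The node detail above is the standard describe view, and the Allocated resources table is what the scheduler and the metrics-dependent tests work against. On a live cluster the equivalent data would come from something like the following (kubectl top only answers once metrics-server is serving metrics.k8s.io):

	    kubectl --context addons-800266 describe node addons-800266
	    kubectl --context addons-800266 top node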
	
	
	==> dmesg <==
	[  +5.588643] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.358343] systemd-fstab-generator[1496]: Ignoring "noauto" option for root device
	[  +4.644593] kauditd_printk_skb: 113 callbacks suppressed
	[  +5.069667] kauditd_printk_skb: 148 callbacks suppressed
	[  +7.554187] kauditd_printk_skb: 53 callbacks suppressed
	[Oct 1 18:56] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.717659] kauditd_printk_skb: 29 callbacks suppressed
	[ +11.678742] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.923509] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.214391] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.927317] kauditd_printk_skb: 15 callbacks suppressed
	[  +8.139816] kauditd_printk_skb: 6 callbacks suppressed
	[Oct 1 18:57] kauditd_printk_skb: 6 callbacks suppressed
	[Oct 1 19:05] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.021279] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.610237] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.377026] kauditd_printk_skb: 39 callbacks suppressed
	[  +6.050567] kauditd_printk_skb: 49 callbacks suppressed
	[  +6.215791] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.048791] kauditd_printk_skb: 9 callbacks suppressed
	[  +9.611037] kauditd_printk_skb: 18 callbacks suppressed
	[Oct 1 19:06] kauditd_printk_skb: 15 callbacks suppressed
	[ +18.938698] kauditd_printk_skb: 49 callbacks suppressed
	[Oct 1 19:07] kauditd_printk_skb: 2 callbacks suppressed
	[Oct 1 19:08] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [f2e78592209ec593f29721f4652b4edbdb5574f343b6c6d59e5bc1b4ec8ddb5e] <==
	{"level":"warn","ts":"2024-10-01T18:56:50.150296Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.127176ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T18:56:50.150332Z","caller":"traceutil/trace.go:171","msg":"trace[150706041] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1079; }","duration":"136.164037ms","start":"2024-10-01T18:56:50.014162Z","end":"2024-10-01T18:56:50.150326Z","steps":["trace[150706041] 'agreement among raft nodes before linearized reading'  (duration: 136.113254ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T18:56:50.150349Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"275.998928ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-01T18:56:50.150370Z","caller":"traceutil/trace.go:171","msg":"trace[217688362] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:1079; }","duration":"276.021502ms","start":"2024-10-01T18:56:49.874341Z","end":"2024-10-01T18:56:50.150363Z","steps":["trace[217688362] 'agreement among raft nodes before linearized reading'  (duration: 275.981089ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T18:56:50.150442Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"356.948245ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T18:56:50.150467Z","caller":"traceutil/trace.go:171","msg":"trace[2015084443] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1079; }","duration":"356.968244ms","start":"2024-10-01T18:56:49.793489Z","end":"2024-10-01T18:56:50.150457Z","steps":["trace[2015084443] 'agreement among raft nodes before linearized reading'  (duration: 356.929735ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T18:56:50.150489Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T18:56:49.793456Z","time spent":"357.027743ms","remote":"127.0.0.1:60426","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-01T18:56:55.772413Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.764463ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T18:56:55.772659Z","caller":"traceutil/trace.go:171","msg":"trace[191106317] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1099; }","duration":"216.022989ms","start":"2024-10-01T18:56:55.556616Z","end":"2024-10-01T18:56:55.772639Z","steps":["trace[191106317] 'range keys from in-memory index tree'  (duration: 215.720422ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T19:05:15.650658Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T19:05:15.286030Z","time spent":"364.616663ms","remote":"127.0.0.1:60278","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2024-10-01T19:05:22.177193Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1466}
	{"level":"info","ts":"2024-10-01T19:05:22.209956Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1466,"took":"32.253715ms","hash":4096953151,"current-db-size-bytes":6348800,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":3366912,"current-db-size-in-use":"3.4 MB"}
	{"level":"info","ts":"2024-10-01T19:05:22.210064Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4096953151,"revision":1466,"compact-revision":-1}
	{"level":"info","ts":"2024-10-01T19:05:47.501580Z","caller":"traceutil/trace.go:171","msg":"trace[217967233] linearizableReadLoop","detail":"{readStateIndex:2360; appliedIndex:2359; }","duration":"130.145136ms","start":"2024-10-01T19:05:47.371407Z","end":"2024-10-01T19:05:47.501552Z","steps":["trace[217967233] 'read index received'  (duration: 130.00794ms)","trace[217967233] 'applied index is now lower than readState.Index'  (duration: 136.393µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-01T19:05:47.501689Z","caller":"traceutil/trace.go:171","msg":"trace[61397452] transaction","detail":"{read_only:false; response_revision:2206; number_of_response:1; }","duration":"182.775156ms","start":"2024-10-01T19:05:47.318904Z","end":"2024-10-01T19:05:47.501679Z","steps":["trace[61397452] 'process raft request'  (duration: 182.527494ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T19:05:47.501878Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.44547ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T19:05:47.501910Z","caller":"traceutil/trace.go:171","msg":"trace[1589703985] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2206; }","duration":"130.519248ms","start":"2024-10-01T19:05:47.371385Z","end":"2024-10-01T19:05:47.501904Z","steps":["trace[1589703985] 'agreement among raft nodes before linearized reading'  (duration: 130.415027ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T19:06:13.366447Z","caller":"traceutil/trace.go:171","msg":"trace[368892018] linearizableReadLoop","detail":"{readStateIndex:2577; appliedIndex:2576; }","duration":"275.996881ms","start":"2024-10-01T19:06:13.090437Z","end":"2024-10-01T19:06:13.366434Z","steps":["trace[368892018] 'read index received'  (duration: 275.842966ms)","trace[368892018] 'applied index is now lower than readState.Index'  (duration: 153.241µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-01T19:06:13.366621Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"276.166839ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/csi-resizer\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T19:06:13.366665Z","caller":"traceutil/trace.go:171","msg":"trace[1039750024] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-resizer; range_end:; response_count:0; response_revision:2413; }","duration":"276.224657ms","start":"2024-10-01T19:06:13.090434Z","end":"2024-10-01T19:06:13.366658Z","steps":["trace[1039750024] 'agreement among raft nodes before linearized reading'  (duration: 276.150291ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T19:06:13.366680Z","caller":"traceutil/trace.go:171","msg":"trace[1223254419] transaction","detail":"{read_only:false; response_revision:2413; number_of_response:1; }","duration":"305.736279ms","start":"2024-10-01T19:06:13.060931Z","end":"2024-10-01T19:06:13.366668Z","steps":["trace[1223254419] 'process raft request'  (duration: 305.400048ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T19:06:13.366861Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T19:06:13.060916Z","time spent":"305.865431ms","remote":"127.0.0.1:60410","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:2406 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-10-01T19:10:22.184963Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1996}
	{"level":"info","ts":"2024-10-01T19:10:22.207233Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1996,"took":"21.653649ms","hash":2483175405,"current-db-size-bytes":6348800,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":4734976,"current-db-size-in-use":"4.7 MB"}
	{"level":"info","ts":"2024-10-01T19:10:22.207354Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2483175405,"revision":1996,"compact-revision":1466}
	
	
	==> kernel <==
	 19:10:23 up 15 min,  0 users,  load average: 0.04, 0.22, 0.26
	Linux addons-800266 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [868f38fe5a25463a1bdbe6eaafc9fdd61fcd07ad2bcc794f9562d0b8dd1b2c67] <==
	E1001 18:57:17.317310       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.79.139:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.79.139:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.79.139:443: connect: connection refused" logger="UnhandledError"
	E1001 18:57:17.340366       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.79.139:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.79.139:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.79.139:443: connect: connection refused" logger="UnhandledError"
	I1001 18:57:17.446602       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1001 19:05:10.759972       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.11.198"}
	I1001 19:05:39.912463       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1001 19:05:41.037899       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1001 19:05:45.764131       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1001 19:05:46.072362       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.218.37"}
	E1001 19:05:47.308520       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1001 19:05:54.444889       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1001 19:06:09.063802       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 19:06:09.064008       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 19:06:09.082635       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 19:06:09.082810       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 19:06:09.112033       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 19:06:09.112130       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 19:06:09.122674       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 19:06:09.122797       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 19:06:09.153120       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 19:06:09.153550       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1001 19:06:10.112407       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1001 19:06:10.153605       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1001 19:06:10.265072       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1001 19:08:09.963064       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.125.46"}
	E1001 19:08:13.851869       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
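
	The 18:57 "failing or missing response from https://10.107.79.139:443/apis/metrics.k8s.io/v1beta1 ... connection refused" entries mean the aggregated metrics API was registered before metrics-server was ready to answer, which is the window the MetricsServer test exercises. Whether the APIService ever went Available can be checked with the commands below (the k8s-app=metrics-server label is the one the stock metrics-server manifests use):

	    kubectl --context addons-800266 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context addons-800266 -n kube-system get pods -l k8s-app=metrics-server -o wide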
	
	
	==> kube-controller-manager [f42a255e28d30da9b9b571370bb7f475734b4e820e95dcfc7e08e3366164272b] <==
	W1001 19:08:25.149104       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 19:08:25.149238       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1001 19:08:31.078842       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-800266"
	W1001 19:08:41.559344       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 19:08:41.559553       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 19:08:58.439662       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 19:08:58.439967       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 19:09:03.109484       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 19:09:03.109680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 19:09:06.752308       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 19:09:06.752492       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 19:09:32.727017       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 19:09:32.727195       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 19:09:37.443309       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 19:09:37.443401       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 19:09:39.012137       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 19:09:39.012188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 19:09:45.584572       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 19:09:45.584645       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 19:10:08.188061       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 19:10:08.188188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 19:10:08.434299       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 19:10:08.434433       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 19:10:22.197340       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 19:10:22.197389       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
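
	The recurring "failed to list *v1.PartialObjectMetadata ... the server could not find the requested resource" entries are the controller-manager's metadata informers retrying API groups whose CRDs disappeared mid-run; this lines up with the apiserver terminating the gadget.kinvolk.io and snapshot.storage.k8s.io watchers around 19:05:41 and 19:06:10 when those addons were torn down. What is still registered could be inspected with:

	    kubectl --context addons-800266 api-resources --api-group=snapshot.storage.k8s.io
	    kubectl --context addons-800266 get crds | grep -E 'gadget|snapshot' || true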
	
	
	==> kube-proxy [7e9081f37c3fb61a5375d14bdb14a2c60017c1e8a63c43d60390321737cd070b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 18:55:32.753592       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 18:55:32.764841       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.56"]
	E1001 18:55:32.764945       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 18:55:32.883864       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 18:55:32.883937       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 18:55:32.883970       1 server_linux.go:169] "Using iptables Proxier"
	I1001 18:55:32.886953       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 18:55:32.887235       1 server.go:483] "Version info" version="v1.31.1"
	I1001 18:55:32.887246       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 18:55:32.888643       1 config.go:199] "Starting service config controller"
	I1001 18:55:32.888665       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 18:55:32.888747       1 config.go:105] "Starting endpoint slice config controller"
	I1001 18:55:32.888765       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 18:55:32.889264       1 config.go:328] "Starting node config controller"
	I1001 18:55:32.889285       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 18:55:32.988968       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 18:55:32.989039       1 shared_informer.go:320] Caches are synced for service config
	I1001 18:55:32.990803       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1f69e4bbfce257ab72b58b6725f7ad1549dbfb02a122f66601536180d27ad34a] <==
	W1001 18:55:23.554557       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1001 18:55:23.554597       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1001 18:55:23.555016       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1001 18:55:23.555076       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 18:55:23.555326       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1001 18:55:23.555355       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 18:55:23.555431       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1001 18:55:23.555462       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1001 18:55:24.361365       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1001 18:55:24.361410       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1001 18:55:24.377658       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1001 18:55:24.377812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 18:55:24.466113       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1001 18:55:24.466234       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 18:55:24.481156       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1001 18:55:24.481267       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 18:55:24.490703       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1001 18:55:24.490779       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 18:55:24.546375       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1001 18:55:24.546563       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1001 18:55:24.858427       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1001 18:55:24.858904       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 18:55:24.919940       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1001 18:55:24.919984       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1001 18:55:27.931251       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 01 19:08:46 addons-800266 kubelet[1206]: E1001 19:08:46.496241    1206 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727809726495380950,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:08:56 addons-800266 kubelet[1206]: E1001 19:08:56.499315    1206 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727809736498974905,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:08:56 addons-800266 kubelet[1206]: E1001 19:08:56.499359    1206 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727809736498974905,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:09:06 addons-800266 kubelet[1206]: E1001 19:09:06.502469    1206 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727809746502133313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:09:06 addons-800266 kubelet[1206]: E1001 19:09:06.502884    1206 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727809746502133313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:09:16 addons-800266 kubelet[1206]: E1001 19:09:16.505941    1206 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727809756505324481,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:09:16 addons-800266 kubelet[1206]: E1001 19:09:16.506256    1206 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727809756505324481,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:09:26 addons-800266 kubelet[1206]: E1001 19:09:26.136903    1206 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 01 19:09:26 addons-800266 kubelet[1206]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 01 19:09:26 addons-800266 kubelet[1206]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 01 19:09:26 addons-800266 kubelet[1206]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 19:09:26 addons-800266 kubelet[1206]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 19:09:26 addons-800266 kubelet[1206]: E1001 19:09:26.509447    1206 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727809766509118040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:09:26 addons-800266 kubelet[1206]: E1001 19:09:26.509559    1206 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727809766509118040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:09:27 addons-800266 kubelet[1206]: I1001 19:09:27.118297    1206 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 01 19:09:36 addons-800266 kubelet[1206]: E1001 19:09:36.512912    1206 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727809776512324505,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:09:36 addons-800266 kubelet[1206]: E1001 19:09:36.512990    1206 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727809776512324505,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:09:46 addons-800266 kubelet[1206]: E1001 19:09:46.515832    1206 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727809786515485954,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:09:46 addons-800266 kubelet[1206]: E1001 19:09:46.515905    1206 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727809786515485954,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:09:56 addons-800266 kubelet[1206]: E1001 19:09:56.519862    1206 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727809796518643769,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:09:56 addons-800266 kubelet[1206]: E1001 19:09:56.520229    1206 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727809796518643769,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:10:06 addons-800266 kubelet[1206]: E1001 19:10:06.523486    1206 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727809806523118815,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:10:06 addons-800266 kubelet[1206]: E1001 19:10:06.523547    1206 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727809806523118815,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:10:16 addons-800266 kubelet[1206]: E1001 19:10:16.532136    1206 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727809816527488780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:10:16 addons-800266 kubelet[1206]: E1001 19:10:16.532192    1206 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727809816527488780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [26504377c61b094c87f9c57dc6547187209d3397c940e127450701ed086d4170] <==
	I1001 18:55:38.925001       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1001 18:55:39.164951       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1001 18:55:39.189198       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1001 18:55:39.234001       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1001 18:55:39.234143       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-800266_c0ffc0e5-d926-445b-9d38-54d07d6e5c0b!
	I1001 18:55:39.246031       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5ebfcb6f-6d49-4e2c-894f-b9d92e850914", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-800266_c0ffc0e5-d926-445b-9d38-54d07d6e5c0b became leader
	I1001 18:55:39.435277       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-800266_c0ffc0e5-d926-445b-9d38-54d07d6e5c0b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-800266 -n addons-800266
helpers_test.go:261: (dbg) Run:  kubectl --context addons-800266 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-800266 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (315.42s)
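Note: the repeated reflector errors earlier in these logs ("failed to list *v1.PartialObjectMetadata: the server could not find the requested resource") are the pattern usually seen when an aggregated API is registered but not actually served; given that this is the MetricsServer test, that API is plausibly metrics.k8s.io/v1beta1. The Go sketch below is a hypothetical diagnostic (not part of this test suite) that asks the API server's discovery endpoint whether that group/version is available; the KUBECONFIG environment variable is an assumption.

package main

import (
	"fmt"
	"log"
	"os"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: KUBECONFIG points at the addons-800266 profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// If metrics-server is serving the aggregated API this returns its resources;
	// if the APIService is unavailable the call fails with a "could not find the
	// requested resource" style error, matching the reflector log lines above.
	rl, err := dc.ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1")
	if err != nil {
		log.Fatalf("metrics.k8s.io/v1beta1 not served: %v", err)
	}
	for _, r := range rl.APIResources {
		fmt.Println(r.Name)
	}
}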

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.24s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-800266
addons_test.go:170: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-800266: exit status 82 (2m0.486131361s)

                                                
                                                
-- stdout --
	* Stopping node "addons-800266"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:172: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-800266" : exit status 82
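For context, the harness surfaces this failure through the process exit status of the command it ran. A minimal, hypothetical Go sketch (not the actual test helper) that runs the same stop command and reads the exit status 82 reported above:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same command the test invoked; the binary path is relative to the test workspace.
	cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "addons-800266")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// In the failure above this prints 82 (GUEST_STOP_TIMEOUT).
		fmt.Println("exit status:", ee.ExitCode())
	} else if err != nil {
		fmt.Println("could not run command:", err)
	}
}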
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-800266
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-800266: exit status 11 (21.46297609s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.56:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-800266" : exit status 11
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-800266
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-800266: exit status 11 (6.14481881s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.56:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-800266" : exit status 11
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-800266
addons_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-800266: exit status 11 (6.143491395s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.56:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:185: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-800266" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.24s)
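Every addon command above fails at the same point: "dial tcp 192.168.39.56:22: connect: no route to host", i.e. the VM's SSH port is unreachable after the timed-out stop. The following is a hypothetical connectivity probe (not part of minikube) that checks the same thing directly, using the address taken from the errors above:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.39.56:22" // node address and SSH port from the failures above
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// Expected while the VM is unreachable, e.g.
		// "dial tcp 192.168.39.56:22: connect: no route to host".
		fmt.Println("ssh port unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("ssh port reachable from", conn.LocalAddr())
}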

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (190.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [79f8bf30-f1ae-4885-92cb-e89e9b0e59df] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004026351s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-338309 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-338309 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-338309 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-338309 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-338309 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3c099e58-c5ad-424a-8b34-6e998b59823c] Pending
helpers_test.go:344: "sp-pod" [3c099e58-c5ad-424a-8b34-6e998b59823c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-338309 -n functional-338309
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2024-10-01 19:19:44.590376433 +0000 UTC m=+1535.539179827
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-338309 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-338309 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-338309/192.168.50.74
Start Time:       Tue, 01 Oct 2024 19:16:44 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8pnb5 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-8pnb5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  3m    default-scheduler  Successfully assigned default/sp-pod to functional-338309
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-338309 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-338309 logs sp-pod -n default: exit status 1 (69.093325ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: ContainerCreating

                                                
                                                
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-338309 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
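The wait that fails here polls for pods matching the label selector under a 3m0s deadline, so a pod stuck in ContainerCreating ends the wait with "context deadline exceeded". Below is a minimal client-go sketch of that kind of wait, assuming KUBECONFIG points at the functional-338309 profile; it is illustrative only, not the actual helpers_test.go code.

package main

import (
	"context"
	"log"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Poll every 2s for up to 3m for a Running pod with the test's label selector.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 3*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("default").List(ctx, metav1.ListOptions{
				LabelSelector: "test=storage-provisioner",
			})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		// With sp-pod stuck in ContainerCreating this surfaces as the
		// "context deadline exceeded" failure reported above.
		log.Fatalf("pod did not become Running: %v", err)
	}
}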
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-338309 -n functional-338309
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-338309 logs -n 25: (1.40254019s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| update-context | functional-338309                                                        | functional-338309 | jenkins | v1.34.0 | 01 Oct 24 19:17 UTC | 01 Oct 24 19:17 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| update-context | functional-338309                                                        | functional-338309 | jenkins | v1.34.0 | 01 Oct 24 19:17 UTC | 01 Oct 24 19:17 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| image          | functional-338309                                                        | functional-338309 | jenkins | v1.34.0 | 01 Oct 24 19:17 UTC | 01 Oct 24 19:17 UTC |
	|                | image ls --format short                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-338309                                                        | functional-338309 | jenkins | v1.34.0 | 01 Oct 24 19:17 UTC | 01 Oct 24 19:17 UTC |
	|                | image ls --format yaml                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh            | functional-338309 ssh pgrep                                              | functional-338309 | jenkins | v1.34.0 | 01 Oct 24 19:17 UTC |                     |
	|                | buildkitd                                                                |                   |         |         |                     |                     |
	| image          | functional-338309 image build -t                                         | functional-338309 | jenkins | v1.34.0 | 01 Oct 24 19:17 UTC | 01 Oct 24 19:17 UTC |
	|                | localhost/my-image:functional-338309                                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                         |                   |         |         |                     |                     |
	| ssh            | functional-338309 ssh stat                                               | functional-338309 | jenkins | v1.34.0 | 01 Oct 24 19:17 UTC | 01 Oct 24 19:17 UTC |
	|                | /mount-9p/created-by-test                                                |                   |         |         |                     |                     |
	| ssh            | functional-338309 ssh stat                                               | functional-338309 | jenkins | v1.34.0 | 01 Oct 24 19:17 UTC | 01 Oct 24 19:17 UTC |
	|                | /mount-9p/created-by-pod                                                 |                   |         |         |                     |                     |
	| ssh            | functional-338309 ssh sudo                                               | functional-338309 | jenkins | v1.34.0 | 01 Oct 24 19:17 UTC | 01 Oct 24 19:17 UTC |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| ssh            | functional-338309 ssh findmnt                                            | functional-338309 | jenkins | v1.34.0 | 01 Oct 24 19:17 UTC |                     |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-338309                                                     | functional-338309 | jenkins | v1.34.0 | 01 Oct 24 19:17 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdspecific-port2509273301/001:/mount-9p |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1 --port 46464                                      |                   |         |         |                     |                     |
	| image          | functional-338309 image ls                                               | functional-338309 | jenkins | v1.34.0 | 01 Oct 24 19:17 UTC | 01 Oct 24 19:17 UTC |
	| image          | functional-338309                                                        | functional-338309 | jenkins | v1.34.0 | 01 Oct 24 19:17 UTC | 01 Oct 24 19:17 UTC |
	|                | image ls --format json                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh            | functional-338309 ssh findmnt                                            | functional-338309 | jenkins | v1.34.0 | 01 Oct 24 19:17 UTC | 01 Oct 24 19:17 UTC |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| image          | functional-338309                                                        | functional-338309 | jenkins | v1.34.0 | 01 Oct 24 19:17 UTC | 01 Oct 24 19:17 UTC |
	|                | image ls --format table                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh            | functional-338309 ssh -- ls                                              | functional-338309 | jenkins | v1.34.0 | 01 Oct 24 19:17 UTC | 01 Oct 24 19:17 UTC |
	|                | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| ssh            | functional-338309 ssh sudo                                               | functional-338309 | jenkins | v1.34.0 | 01 Oct 24 19:17 UTC |                     |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount          | -p functional-338309                                                     | functional-338309 | jenkins | v1.34.0 | 01 Oct 24 19:17 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3015727586/001:/mount2   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh            | functional-338309 ssh findmnt                                            | functional-338309 | jenkins | v1.34.0 | 01 Oct 24 19:17 UTC |                     |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| mount          | -p functional-338309                                                     | functional-338309 | jenkins | v1.34.0 | 01 Oct 24 19:17 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3015727586/001:/mount3   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-338309                                                     | functional-338309 | jenkins | v1.34.0 | 01 Oct 24 19:17 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3015727586/001:/mount1   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh            | functional-338309 ssh findmnt                                            | functional-338309 | jenkins | v1.34.0 | 01 Oct 24 19:17 UTC | 01 Oct 24 19:17 UTC |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh            | functional-338309 ssh findmnt                                            | functional-338309 | jenkins | v1.34.0 | 01 Oct 24 19:17 UTC | 01 Oct 24 19:17 UTC |
	|                | -T /mount2                                                               |                   |         |         |                     |                     |
	| ssh            | functional-338309 ssh findmnt                                            | functional-338309 | jenkins | v1.34.0 | 01 Oct 24 19:17 UTC | 01 Oct 24 19:17 UTC |
	|                | -T /mount3                                                               |                   |         |         |                     |                     |
	| mount          | -p functional-338309                                                     | functional-338309 | jenkins | v1.34.0 | 01 Oct 24 19:17 UTC |                     |
	|                | --kill=true                                                              |                   |         |         |                     |                     |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 19:17:00
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 19:17:00.781991   29076 out.go:345] Setting OutFile to fd 1 ...
	I1001 19:17:00.782091   29076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:17:00.782096   29076 out.go:358] Setting ErrFile to fd 2...
	I1001 19:17:00.782099   29076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:17:00.782378   29076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 19:17:00.782911   29076 out.go:352] Setting JSON to false
	I1001 19:17:00.783819   29076 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3563,"bootTime":1727806658,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 19:17:00.783920   29076 start.go:139] virtualization: kvm guest
	I1001 19:17:00.786117   29076 out.go:177] * [functional-338309] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 19:17:00.787450   29076 notify.go:220] Checking for updates...
	I1001 19:17:00.787466   29076 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 19:17:00.788781   29076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 19:17:00.790031   29076 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 19:17:00.791344   29076 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:17:00.792443   29076 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 19:17:00.793675   29076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 19:17:00.795180   29076 config.go:182] Loaded profile config "functional-338309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:17:00.795565   29076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:17:00.795609   29076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:17:00.811248   29076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38525
	I1001 19:17:00.811775   29076 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:17:00.812484   29076 main.go:141] libmachine: Using API Version  1
	I1001 19:17:00.812506   29076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:17:00.812862   29076 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:17:00.813033   29076 main.go:141] libmachine: (functional-338309) Calling .DriverName
	I1001 19:17:00.813277   29076 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 19:17:00.813656   29076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:17:00.813693   29076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:17:00.829159   29076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41081
	I1001 19:17:00.829782   29076 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:17:00.830357   29076 main.go:141] libmachine: Using API Version  1
	I1001 19:17:00.830378   29076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:17:00.830801   29076 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:17:00.831018   29076 main.go:141] libmachine: (functional-338309) Calling .DriverName
	I1001 19:17:00.870264   29076 out.go:177] * Using the kvm2 driver based on existing profile
	I1001 19:17:00.871461   29076 start.go:297] selected driver: kvm2
	I1001 19:17:00.871481   29076 start.go:901] validating driver "kvm2" against &{Name:functional-338309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:functional-338309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.74 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:17:00.871628   29076 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 19:17:00.873981   29076 out.go:201] 
	W1001 19:17:00.875279   29076 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1001 19:17:00.876374   29076 out.go:201] 
	
	
	==> CRI-O <==
	Oct 01 19:19:45 functional-338309 crio[4331]: time="2024-10-01 19:19:45.431494611Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0ab71d27-8c88-4fad-9e62-b87ad0004adf name=/runtime.v1.RuntimeService/Version
	Oct 01 19:19:45 functional-338309 crio[4331]: time="2024-10-01 19:19:45.432614027Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eb278ee8-f13e-4371-ba53-faee5e94ce2b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:19:45 functional-338309 crio[4331]: time="2024-10-01 19:19:45.433520150Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810385433492841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252069,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eb278ee8-f13e-4371-ba53-faee5e94ce2b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:19:45 functional-338309 crio[4331]: time="2024-10-01 19:19:45.434114769Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f53a5c0b-a68c-49ba-84d9-5bac3fef847f name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:19:45 functional-338309 crio[4331]: time="2024-10-01 19:19:45.434218597Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f53a5c0b-a68c-49ba-84d9-5bac3fef847f name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:19:45 functional-338309 crio[4331]: time="2024-10-01 19:19:45.434606627Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:63e36c4e921e09e0758ac31301f717486f85b5c77916ac866014d3764856e449,PodSandboxId:e12658557fb5e9322ded1ab95e267bc59b7e29f2b2e7b554247fbeeaa6664fc7,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1727810234185674733,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c5db448b4-jdjnl,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 471a838b-85e1-421a-aece-6479ee1cd1a3,},Annotations:map[string]string{io.kube
rnetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:439fea055431366a443cef48341c3008fdeb0d28c7c547f23d091080e6c38631,PodSandboxId:26cd5579cf50bbf72d92f1556913f076efcb15fd68b9294944cdbf1be668116f,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1727810230988695409,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-bjxfz,io.kubernetes
.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 38765d6c-32fc-47a6-a519-324cdae87d5e,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d9383d9dab31e0a93eda6e0efe9d35defc098085e1cdc99bd3fa37f1a01836,PodSandboxId:00e78ade8bde48ab0096bfc31b6d988d9493670b0eb97c584f49e4293257102c,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1727810224223844578,Labels:map[string]string{io.k
ubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c1d48946-f6c3-4c1f-995a-12daa3e79d50,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e03e89ffa2c479baa6eca030b43df1ed2db36cbf5b5cb9500251e53b22aecc2,PodSandboxId:2df3e3649ccf7dfac55ed30a41883c3132da1f5f7cd25ea7ee0778a6ab88c3bf,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1727810213793898315,Labels:map[string]string{io.kubernetes.container.name: echoserver,i
o.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-9mtcl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: abc7b6fd-f718-461c-bb6a-f17f81d11687,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b862d61f06a874b2f487a2db9217fcbf03e573598b330ade7a01e4b024ffe8,PodSandboxId:ff6008bbd616002a9fbacd94424824c520f69a64c5f2eb12be8595f378d33a92,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1727810212610406347,Labels:map[string]string{io.kubernetes.container.na
me: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-pkvnh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3585567d-c617-46a4-9d6f-a0b7bf099087,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aba585dedf28e5199c348b0f2795013c70dcaeb8d7f4612d454d36f43f6cd58d,PodSandboxId:594d89ab2c64243bf1a2d1946cb93bcc613895cc43626c37ab545f5fe9f64187,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1727810208214115576,Labels:map[string]string{io.kubernetes.container.na
me: mysql,io.kubernetes.pod.name: mysql-6cdb49bbb-rkbcv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16385339-46f2-4a5e-ac4b-5c53d81e7422,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2bc7203452e500b37a60575cca742c03f776ff909abbf7d4bd78de49ab061bc,PodSandboxId:2bbd360553f4c7f89debebbffb0bb58cddf9e32a1f6a9b40956a14a09738d4ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17
27810167771178923,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79f8bf30-f1ae-4885-92cb-e89e9b0e59df,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46b003210094443b664c832144bcb3affe784f2cb51fabffcf5d43836e954a9c,PodSandboxId:65d7efcc7db8c3719c0a0078c26446c231e1e800c5502545e62434175efdc0c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810167757802141,Labels
:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sg2wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5daab43f-4305-4d11-a210-9f33cabbf773,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a017fe864389c711635adf9004e6a874981a3c80714db43a828d62c99795b0,PodSandboxId:06f1fe5458693243114bf98dff300849edcac4b52a756f8621d948d63f981f83,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annota
tions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727810167769458660,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bznr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1977d5b8-5d2c-4102-8fb5-0be51b881658,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702f1d82e337429efdff9fa044201c8564204b746b89d56ee2bad511935426c4,PodSandboxId:7166422aca98622c52cef7aeb25baf5efd31b43dc1edfe331e786d3892b69037,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727810164049861075,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-338309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c1839d4a92cbef024e8add8e8192f6d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eebbbc7830d2cdf271caa6836cf33a55b5003738185a1402e9d1bf026b669716,PodSandboxId:5c5a069c73fd5c21678219324f9c3d7d0143750a44ad1fb9b84d85dc3453470f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727810163919925836,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-338309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d6a905e83d59f2bc9eea4c02b2b8617,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9fdc640149899ab40c0830c96f3f1a37e5f66ad175458029198b780c17a0d32,PodSandboxId:32f0809dd5f47fe7fd8fb8c2abd03fb2ac1d0277d9730e7f1d62d92251428477,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad94157
5eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727810163889342722,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-338309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d3487f863c8dc14be74b7aa74e475c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd05252d12e12bc93bdd4d0912e108e0e263f44ede79b4a9277f1fc030be13a,PodSandboxId:76d82abcb65e0d2992bccf9be01fc188e9205cd5f8c8c1a9d1e7d53b30aa9ab3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90b
ae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727810163871694082,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-338309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfe786dfbabf7be43d6ef6c728169e72,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b474969fe1ad9e98eb9e951e490b0e0346ea1e14e6c6289a3b2ee13e332e2ee,PodSandboxId:8fb9bca225e439ba8e1509be70783bed9774de0d49ee548408e3bf38df9c12f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc4
8af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727810127341116400,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sg2wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5daab43f-4305-4d11-a210-9f33cabbf773,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b128a35a281f6eff601083d538aaba63dfd5c9facfcb2be8db338c3476f0426a,PodSandboxId:5641f8eecac310537c19d25d2b608bd291690f52aa6f7b4541174bfb6aa79550,Metadata:&ContainerMetadata{Name:storage-pro
visioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727810127066874695,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79f8bf30-f1ae-4885-92cb-e89e9b0e59df,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98456691a01ced88e05c8906cf952327247b7cb82fa85cd400c7538b11a42e7,PodSandboxId:10f6b17e1c83d919d48a604b695548263d680305680590b46da13c8c5cec4a2e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},I
mage:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727810127024829441,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bznr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1977d5b8-5d2c-4102-8fb5-0be51b881658,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2026884f63bc418e0f2b997c82d10516a1c6057821f05f4cc0f4cc78cedb9513,PodSandboxId:f90ce1c80bb48346401f60743e8ee06e101434ad4c9e52b320f7acd0e537fee7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad9
41575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727810123261676562,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-338309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d3487f863c8dc14be74b7aa74e475c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:687d938ad63591e6c13a4e2ee91a7d642ddccee564c392f7fd684d6556b25e9c,PodSandboxId:aa07cfeeebd5445fa7a78c0419f4ea8cbd18af05dbf4241659706a92a9924ed6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d
90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727810123217942793,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-338309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfe786dfbabf7be43d6ef6c728169e72,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0484ee171a31951f7b237a2c4bbbdb574cee300d802f50f7d80bb1263d85444,PodSandboxId:b7b3c452d6a9119216b00140025cb317476cd1397316040edb6102b832910f0f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d269
15af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727810123222214238,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-338309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d6a905e83d59f2bc9eea4c02b2b8617,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f53a5c0b-a68c-49ba-84d9-5bac3fef847f name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:19:45 functional-338309 crio[4331]: time="2024-10-01 19:19:45.455685268Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b08498e-4f96-4dd9-b118-f2f84e9aef6f name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 01 19:19:45 functional-338309 crio[4331]: time="2024-10-01 19:19:45.456025665Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:26cd5579cf50bbf72d92f1556913f076efcb15fd68b9294944cdbf1be668116f,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-695b96c756-bjxfz,Uid:38765d6c-32fc-47a6-a519-324cdae87d5e,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727810222609018743,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-bjxfz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 38765d6c-32fc-47a6-a519-324cdae87d5e,k8s-app: kubernetes-dashboard,pod-template-hash: 695b96c756,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-01T19:17:02.269306734Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e12658557fb5e9322ded1ab95e267bc59b7e29f2b2e7b554247fbeeaa6664fc7,Metadata:&PodSandboxMetadata{Name
:dashboard-metrics-scraper-c5db448b4-jdjnl,Uid:471a838b-85e1-421a-aece-6479ee1cd1a3,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727810222598347704,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-c5db448b4-jdjnl,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 471a838b-85e1-421a-aece-6479ee1cd1a3,k8s-app: dashboard-metrics-scraper,pod-template-hash: c5db448b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-01T19:17:02.275358641Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:2df3e3649ccf7dfac55ed30a41883c3132da1f5f7cd25ea7ee0778a6ab88c3bf,Metadata:&PodSandboxMetadata{Name:hello-node-connect-67bdd5bbb4-9mtcl,Uid:abc7b6fd-f718-461c-bb6a-f17f81d11687,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727810213520249763,Labels:map[string]string{app: hello-node-connect,io.kubernetes.container.nam
e: POD,io.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-9mtcl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: abc7b6fd-f718-461c-bb6a-f17f81d11687,pod-template-hash: 67bdd5bbb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-01T19:16:53.213360577Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ff6008bbd616002a9fbacd94424824c520f69a64c5f2eb12be8595f378d33a92,Metadata:&PodSandboxMetadata{Name:hello-node-6b9f76b5c7-pkvnh,Uid:3585567d-c617-46a4-9d6f-a0b7bf099087,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727810195874197189,Labels:map[string]string{app: hello-node,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-6b9f76b5c7-pkvnh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3585567d-c617-46a4-9d6f-a0b7bf099087,pod-template-hash: 6b9f76b5c7,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-01T19:16:35.565634419Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:594d89ab2c64
243bf1a2d1946cb93bcc613895cc43626c37ab545f5fe9f64187,Metadata:&PodSandboxMetadata{Name:mysql-6cdb49bbb-rkbcv,Uid:16385339-46f2-4a5e-ac4b-5c53d81e7422,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727810195206158153,Labels:map[string]string{app: mysql,io.kubernetes.container.name: POD,io.kubernetes.pod.name: mysql-6cdb49bbb-rkbcv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16385339-46f2-4a5e-ac4b-5c53d81e7422,pod-template-hash: 6cdb49bbb,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-01T19:16:34.876216494Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7166422aca98622c52cef7aeb25baf5efd31b43dc1edfe331e786d3892b69037,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-338309,Uid:5c1839d4a92cbef024e8add8e8192f6d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727810163908755448,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-338309
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c1839d4a92cbef024e8add8e8192f6d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.74:8441,kubernetes.io/config.hash: 5c1839d4a92cbef024e8add8e8192f6d,kubernetes.io/config.seen: 2024-10-01T19:16:03.434422713Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:65d7efcc7db8c3719c0a0078c26446c231e1e800c5502545e62434175efdc0c8,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-sg2wn,Uid:5daab43f-4305-4d11-a210-9f33cabbf773,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727810161309266493,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-sg2wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5daab43f-4305-4d11-a210-9f33cabbf773,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-01T19:15:26.538486637Z,kuberne
tes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:06f1fe5458693243114bf98dff300849edcac4b52a756f8621d948d63f981f83,Metadata:&PodSandboxMetadata{Name:kube-proxy-bznr6,Uid:1977d5b8-5d2c-4102-8fb5-0be51b881658,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727810161045065765,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-bznr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1977d5b8-5d2c-4102-8fb5-0be51b881658,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-01T19:15:26.538497021Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2bbd360553f4c7f89debebbffb0bb58cddf9e32a1f6a9b40956a14a09738d4ab,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:79f8bf30-f1ae-4885-92cb-e89e9b0e59df,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727810160954448334,Labels:map[string]string{addonmanag
er.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79f8bf30-f1ae-4885-92cb-e89e9b0e59df,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 202
4-10-01T19:15:26.538499868Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5c5a069c73fd5c21678219324f9c3d7d0143750a44ad1fb9b84d85dc3453470f,Metadata:&PodSandboxMetadata{Name:etcd-functional-338309,Uid:4d6a905e83d59f2bc9eea4c02b2b8617,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727810160937609339,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-338309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d6a905e83d59f2bc9eea4c02b2b8617,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.74:2379,kubernetes.io/config.hash: 4d6a905e83d59f2bc9eea4c02b2b8617,kubernetes.io/config.seen: 2024-10-01T19:15:22.538486698Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:32f0809dd5f47fe7fd8fb8c2abd03fb2ac1d0277d9730e7f1d62d92251428477,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-338309,Uid:53d3487f863c8d
c14be74b7aa74e475c,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727810160915257062,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-338309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d3487f863c8dc14be74b7aa74e475c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 53d3487f863c8dc14be74b7aa74e475c,kubernetes.io/config.seen: 2024-10-01T19:15:22.538492865Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:76d82abcb65e0d2992bccf9be01fc188e9205cd5f8c8c1a9d1e7d53b30aa9ab3,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-338309,Uid:dfe786dfbabf7be43d6ef6c728169e72,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727810160868875335,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-338309,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: dfe786dfbabf7be43d6ef6c728169e72,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: dfe786dfbabf7be43d6ef6c728169e72,kubernetes.io/config.seen: 2024-10-01T19:15:22.538492004Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=2b08498e-4f96-4dd9-b118-f2f84e9aef6f name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 01 19:19:45 functional-338309 crio[4331]: time="2024-10-01 19:19:45.456683863Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=350ac058-3f69-47d6-ab41-e2555c63ebf1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:19:45 functional-338309 crio[4331]: time="2024-10-01 19:19:45.456781868Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=350ac058-3f69-47d6-ab41-e2555c63ebf1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:19:45 functional-338309 crio[4331]: time="2024-10-01 19:19:45.457048176Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:63e36c4e921e09e0758ac31301f717486f85b5c77916ac866014d3764856e449,PodSandboxId:e12658557fb5e9322ded1ab95e267bc59b7e29f2b2e7b554247fbeeaa6664fc7,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1727810234185674733,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c5db448b4-jdjnl,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 471a838b-85e1-421a-aece-6479ee1cd1a3,},Annotations:map[string]string{io.kube
rnetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:439fea055431366a443cef48341c3008fdeb0d28c7c547f23d091080e6c38631,PodSandboxId:26cd5579cf50bbf72d92f1556913f076efcb15fd68b9294944cdbf1be668116f,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1727810230988695409,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-bjxfz,io.kubernetes
.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 38765d6c-32fc-47a6-a519-324cdae87d5e,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e03e89ffa2c479baa6eca030b43df1ed2db36cbf5b5cb9500251e53b22aecc2,PodSandboxId:2df3e3649ccf7dfac55ed30a41883c3132da1f5f7cd25ea7ee0778a6ab88c3bf,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1727810213793898315,Labels:map[string]string{io.kubernetes.container.name: echoserver
,io.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-9mtcl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: abc7b6fd-f718-461c-bb6a-f17f81d11687,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b862d61f06a874b2f487a2db9217fcbf03e573598b330ade7a01e4b024ffe8,PodSandboxId:ff6008bbd616002a9fbacd94424824c520f69a64c5f2eb12be8595f378d33a92,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1727810212610406347,Labels:map[string]string{io.kubernetes.container.
name: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-pkvnh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3585567d-c617-46a4-9d6f-a0b7bf099087,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aba585dedf28e5199c348b0f2795013c70dcaeb8d7f4612d454d36f43f6cd58d,PodSandboxId:594d89ab2c64243bf1a2d1946cb93bcc613895cc43626c37ab545f5fe9f64187,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1727810208214115576,Labels:map[string]string{io.kubernetes.container.
name: mysql,io.kubernetes.pod.name: mysql-6cdb49bbb-rkbcv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16385339-46f2-4a5e-ac4b-5c53d81e7422,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2bc7203452e500b37a60575cca742c03f776ff909abbf7d4bd78de49ab061bc,PodSandboxId:2bbd360553f4c7f89debebbffb0bb58cddf9e32a1f6a9b40956a14a09738d4ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:
1727810167771178923,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79f8bf30-f1ae-4885-92cb-e89e9b0e59df,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46b003210094443b664c832144bcb3affe784f2cb51fabffcf5d43836e954a9c,PodSandboxId:65d7efcc7db8c3719c0a0078c26446c231e1e800c5502545e62434175efdc0c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810167757802141,Labe
ls:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sg2wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5daab43f-4305-4d11-a210-9f33cabbf773,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a017fe864389c711635adf9004e6a874981a3c80714db43a828d62c99795b0,PodSandboxId:06f1fe5458693243114bf98dff300849edcac4b52a756f8621d948d63f981f83,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727810167769458660,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bznr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1977d5b8-5d2c-4102-8fb5-0be51b881658,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702f1d82e337429efdff9fa044201c8564204b746b89d56ee2bad511935426c4,PodSandboxId:7166422aca98622c52cef7aeb25baf5efd31b43dc1edfe331e786d3892b69037,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727810164049861075,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-338309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c1839d4a92cbef024e8add8e8192f6d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eebbbc7830d2cdf271caa6836cf33a55b5003738185a1402e9d1bf026b669716,PodSandboxId:5c5a069c73fd5c21678219324f9c3d7d0143750a44ad1fb9b84d85dc3453470f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727810163919925836,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-338309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d6a905e83d59f2bc9eea4c02b2b8617,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9fdc640149899ab40c0830c96f3f1a37e5f66ad175458029198b780c17a0d32,PodSandboxId:32f0809dd5f47fe7fd8fb8c2abd03fb2ac1d0277d9730e7f1d62d92251428477,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941
575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727810163889342722,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-338309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d3487f863c8dc14be74b7aa74e475c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd05252d12e12bc93bdd4d0912e108e0e263f44ede79b4a9277f1fc030be13a,PodSandboxId:76d82abcb65e0d2992bccf9be01fc188e9205cd5f8c8c1a9d1e7d53b30aa9ab3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d9
0bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727810163871694082,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-338309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfe786dfbabf7be43d6ef6c728169e72,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=350ac058-3f69-47d6-ab41-e2555c63ebf1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:19:45 functional-338309 crio[4331]: time="2024-10-01 19:19:45.469753501Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2fef19b2-520c-476c-873d-264091099af8 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:19:45 functional-338309 crio[4331]: time="2024-10-01 19:19:45.469839986Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2fef19b2-520c-476c-873d-264091099af8 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:19:45 functional-338309 crio[4331]: time="2024-10-01 19:19:45.470718266Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=983d1757-abe2-477b-b6e1-0d4f0ecfe1b6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:19:45 functional-338309 crio[4331]: time="2024-10-01 19:19:45.471633600Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810385471604237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252069,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=983d1757-abe2-477b-b6e1-0d4f0ecfe1b6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:19:45 functional-338309 crio[4331]: time="2024-10-01 19:19:45.472215049Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d77c8db2-bd79-4f95-95f9-5f127f4b1c2e name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:19:45 functional-338309 crio[4331]: time="2024-10-01 19:19:45.472272358Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d77c8db2-bd79-4f95-95f9-5f127f4b1c2e name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:19:45 functional-338309 crio[4331]: time="2024-10-01 19:19:45.472641912Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:63e36c4e921e09e0758ac31301f717486f85b5c77916ac866014d3764856e449,PodSandboxId:e12658557fb5e9322ded1ab95e267bc59b7e29f2b2e7b554247fbeeaa6664fc7,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1727810234185674733,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c5db448b4-jdjnl,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 471a838b-85e1-421a-aece-6479ee1cd1a3,},Annotations:map[string]string{io.kube
rnetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:439fea055431366a443cef48341c3008fdeb0d28c7c547f23d091080e6c38631,PodSandboxId:26cd5579cf50bbf72d92f1556913f076efcb15fd68b9294944cdbf1be668116f,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1727810230988695409,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-bjxfz,io.kubernetes
.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 38765d6c-32fc-47a6-a519-324cdae87d5e,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d9383d9dab31e0a93eda6e0efe9d35defc098085e1cdc99bd3fa37f1a01836,PodSandboxId:00e78ade8bde48ab0096bfc31b6d988d9493670b0eb97c584f49e4293257102c,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1727810224223844578,Labels:map[string]string{io.k
ubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c1d48946-f6c3-4c1f-995a-12daa3e79d50,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e03e89ffa2c479baa6eca030b43df1ed2db36cbf5b5cb9500251e53b22aecc2,PodSandboxId:2df3e3649ccf7dfac55ed30a41883c3132da1f5f7cd25ea7ee0778a6ab88c3bf,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1727810213793898315,Labels:map[string]string{io.kubernetes.container.name: echoserver,i
o.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-9mtcl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: abc7b6fd-f718-461c-bb6a-f17f81d11687,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b862d61f06a874b2f487a2db9217fcbf03e573598b330ade7a01e4b024ffe8,PodSandboxId:ff6008bbd616002a9fbacd94424824c520f69a64c5f2eb12be8595f378d33a92,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1727810212610406347,Labels:map[string]string{io.kubernetes.container.na
me: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-pkvnh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3585567d-c617-46a4-9d6f-a0b7bf099087,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aba585dedf28e5199c348b0f2795013c70dcaeb8d7f4612d454d36f43f6cd58d,PodSandboxId:594d89ab2c64243bf1a2d1946cb93bcc613895cc43626c37ab545f5fe9f64187,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1727810208214115576,Labels:map[string]string{io.kubernetes.container.na
me: mysql,io.kubernetes.pod.name: mysql-6cdb49bbb-rkbcv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16385339-46f2-4a5e-ac4b-5c53d81e7422,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2bc7203452e500b37a60575cca742c03f776ff909abbf7d4bd78de49ab061bc,PodSandboxId:2bbd360553f4c7f89debebbffb0bb58cddf9e32a1f6a9b40956a14a09738d4ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17
27810167771178923,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79f8bf30-f1ae-4885-92cb-e89e9b0e59df,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46b003210094443b664c832144bcb3affe784f2cb51fabffcf5d43836e954a9c,PodSandboxId:65d7efcc7db8c3719c0a0078c26446c231e1e800c5502545e62434175efdc0c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810167757802141,Labels
:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sg2wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5daab43f-4305-4d11-a210-9f33cabbf773,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a017fe864389c711635adf9004e6a874981a3c80714db43a828d62c99795b0,PodSandboxId:06f1fe5458693243114bf98dff300849edcac4b52a756f8621d948d63f981f83,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annota
tions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727810167769458660,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bznr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1977d5b8-5d2c-4102-8fb5-0be51b881658,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702f1d82e337429efdff9fa044201c8564204b746b89d56ee2bad511935426c4,PodSandboxId:7166422aca98622c52cef7aeb25baf5efd31b43dc1edfe331e786d3892b69037,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727810164049861075,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-338309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c1839d4a92cbef024e8add8e8192f6d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eebbbc7830d2cdf271caa6836cf33a55b5003738185a1402e9d1bf026b669716,PodSandboxId:5c5a069c73fd5c21678219324f9c3d7d0143750a44ad1fb9b84d85dc3453470f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727810163919925836,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-338309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d6a905e83d59f2bc9eea4c02b2b8617,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9fdc640149899ab40c0830c96f3f1a37e5f66ad175458029198b780c17a0d32,PodSandboxId:32f0809dd5f47fe7fd8fb8c2abd03fb2ac1d0277d9730e7f1d62d92251428477,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad94157
5eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727810163889342722,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-338309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d3487f863c8dc14be74b7aa74e475c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd05252d12e12bc93bdd4d0912e108e0e263f44ede79b4a9277f1fc030be13a,PodSandboxId:76d82abcb65e0d2992bccf9be01fc188e9205cd5f8c8c1a9d1e7d53b30aa9ab3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90b
ae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727810163871694082,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-338309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfe786dfbabf7be43d6ef6c728169e72,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b474969fe1ad9e98eb9e951e490b0e0346ea1e14e6c6289a3b2ee13e332e2ee,PodSandboxId:8fb9bca225e439ba8e1509be70783bed9774de0d49ee548408e3bf38df9c12f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc4
8af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727810127341116400,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sg2wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5daab43f-4305-4d11-a210-9f33cabbf773,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b128a35a281f6eff601083d538aaba63dfd5c9facfcb2be8db338c3476f0426a,PodSandboxId:5641f8eecac310537c19d25d2b608bd291690f52aa6f7b4541174bfb6aa79550,Metadata:&ContainerMetadata{Name:storage-pro
visioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727810127066874695,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79f8bf30-f1ae-4885-92cb-e89e9b0e59df,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98456691a01ced88e05c8906cf952327247b7cb82fa85cd400c7538b11a42e7,PodSandboxId:10f6b17e1c83d919d48a604b695548263d680305680590b46da13c8c5cec4a2e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},I
mage:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727810127024829441,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bznr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1977d5b8-5d2c-4102-8fb5-0be51b881658,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2026884f63bc418e0f2b997c82d10516a1c6057821f05f4cc0f4cc78cedb9513,PodSandboxId:f90ce1c80bb48346401f60743e8ee06e101434ad4c9e52b320f7acd0e537fee7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad9
41575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727810123261676562,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-338309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d3487f863c8dc14be74b7aa74e475c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:687d938ad63591e6c13a4e2ee91a7d642ddccee564c392f7fd684d6556b25e9c,PodSandboxId:aa07cfeeebd5445fa7a78c0419f4ea8cbd18af05dbf4241659706a92a9924ed6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d
90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727810123217942793,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-338309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfe786dfbabf7be43d6ef6c728169e72,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0484ee171a31951f7b237a2c4bbbdb574cee300d802f50f7d80bb1263d85444,PodSandboxId:b7b3c452d6a9119216b00140025cb317476cd1397316040edb6102b832910f0f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d269
15af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727810123222214238,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-338309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d6a905e83d59f2bc9eea4c02b2b8617,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d77c8db2-bd79-4f95-95f9-5f127f4b1c2e name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:19:45 functional-338309 crio[4331]: time="2024-10-01 19:19:45.513025706Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f82e076c-b1aa-4c46-ae0f-972ac9c765ee name=/runtime.v1.RuntimeService/Version
	Oct 01 19:19:45 functional-338309 crio[4331]: time="2024-10-01 19:19:45.513114004Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f82e076c-b1aa-4c46-ae0f-972ac9c765ee name=/runtime.v1.RuntimeService/Version
	Oct 01 19:19:45 functional-338309 crio[4331]: time="2024-10-01 19:19:45.514730587Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0538c802-47ea-41d4-a71e-cc72adc5b9ad name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:19:45 functional-338309 crio[4331]: time="2024-10-01 19:19:45.515658793Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810385515623833,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252069,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0538c802-47ea-41d4-a71e-cc72adc5b9ad name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:19:45 functional-338309 crio[4331]: time="2024-10-01 19:19:45.516301553Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=25649989-1b84-43dc-8527-bffabeb589a6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:19:45 functional-338309 crio[4331]: time="2024-10-01 19:19:45.516365052Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=25649989-1b84-43dc-8527-bffabeb589a6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:19:45 functional-338309 crio[4331]: time="2024-10-01 19:19:45.516767059Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:63e36c4e921e09e0758ac31301f717486f85b5c77916ac866014d3764856e449,PodSandboxId:e12658557fb5e9322ded1ab95e267bc59b7e29f2b2e7b554247fbeeaa6664fc7,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1727810234185674733,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c5db448b4-jdjnl,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 471a838b-85e1-421a-aece-6479ee1cd1a3,},Annotations:map[string]string{io.kube
rnetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:439fea055431366a443cef48341c3008fdeb0d28c7c547f23d091080e6c38631,PodSandboxId:26cd5579cf50bbf72d92f1556913f076efcb15fd68b9294944cdbf1be668116f,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1727810230988695409,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-bjxfz,io.kubernetes
.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 38765d6c-32fc-47a6-a519-324cdae87d5e,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d9383d9dab31e0a93eda6e0efe9d35defc098085e1cdc99bd3fa37f1a01836,PodSandboxId:00e78ade8bde48ab0096bfc31b6d988d9493670b0eb97c584f49e4293257102c,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1727810224223844578,Labels:map[string]string{io.k
ubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c1d48946-f6c3-4c1f-995a-12daa3e79d50,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e03e89ffa2c479baa6eca030b43df1ed2db36cbf5b5cb9500251e53b22aecc2,PodSandboxId:2df3e3649ccf7dfac55ed30a41883c3132da1f5f7cd25ea7ee0778a6ab88c3bf,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1727810213793898315,Labels:map[string]string{io.kubernetes.container.name: echoserver,i
o.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-9mtcl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: abc7b6fd-f718-461c-bb6a-f17f81d11687,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b862d61f06a874b2f487a2db9217fcbf03e573598b330ade7a01e4b024ffe8,PodSandboxId:ff6008bbd616002a9fbacd94424824c520f69a64c5f2eb12be8595f378d33a92,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1727810212610406347,Labels:map[string]string{io.kubernetes.container.na
me: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-pkvnh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3585567d-c617-46a4-9d6f-a0b7bf099087,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aba585dedf28e5199c348b0f2795013c70dcaeb8d7f4612d454d36f43f6cd58d,PodSandboxId:594d89ab2c64243bf1a2d1946cb93bcc613895cc43626c37ab545f5fe9f64187,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1727810208214115576,Labels:map[string]string{io.kubernetes.container.na
me: mysql,io.kubernetes.pod.name: mysql-6cdb49bbb-rkbcv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16385339-46f2-4a5e-ac4b-5c53d81e7422,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2bc7203452e500b37a60575cca742c03f776ff909abbf7d4bd78de49ab061bc,PodSandboxId:2bbd360553f4c7f89debebbffb0bb58cddf9e32a1f6a9b40956a14a09738d4ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17
27810167771178923,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79f8bf30-f1ae-4885-92cb-e89e9b0e59df,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46b003210094443b664c832144bcb3affe784f2cb51fabffcf5d43836e954a9c,PodSandboxId:65d7efcc7db8c3719c0a0078c26446c231e1e800c5502545e62434175efdc0c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810167757802141,Labels
:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sg2wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5daab43f-4305-4d11-a210-9f33cabbf773,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a017fe864389c711635adf9004e6a874981a3c80714db43a828d62c99795b0,PodSandboxId:06f1fe5458693243114bf98dff300849edcac4b52a756f8621d948d63f981f83,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annota
tions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727810167769458660,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bznr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1977d5b8-5d2c-4102-8fb5-0be51b881658,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702f1d82e337429efdff9fa044201c8564204b746b89d56ee2bad511935426c4,PodSandboxId:7166422aca98622c52cef7aeb25baf5efd31b43dc1edfe331e786d3892b69037,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727810164049861075,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-338309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c1839d4a92cbef024e8add8e8192f6d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eebbbc7830d2cdf271caa6836cf33a55b5003738185a1402e9d1bf026b669716,PodSandboxId:5c5a069c73fd5c21678219324f9c3d7d0143750a44ad1fb9b84d85dc3453470f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727810163919925836,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-338309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d6a905e83d59f2bc9eea4c02b2b8617,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9fdc640149899ab40c0830c96f3f1a37e5f66ad175458029198b780c17a0d32,PodSandboxId:32f0809dd5f47fe7fd8fb8c2abd03fb2ac1d0277d9730e7f1d62d92251428477,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad94157
5eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727810163889342722,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-338309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d3487f863c8dc14be74b7aa74e475c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd05252d12e12bc93bdd4d0912e108e0e263f44ede79b4a9277f1fc030be13a,PodSandboxId:76d82abcb65e0d2992bccf9be01fc188e9205cd5f8c8c1a9d1e7d53b30aa9ab3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90b
ae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727810163871694082,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-338309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfe786dfbabf7be43d6ef6c728169e72,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b474969fe1ad9e98eb9e951e490b0e0346ea1e14e6c6289a3b2ee13e332e2ee,PodSandboxId:8fb9bca225e439ba8e1509be70783bed9774de0d49ee548408e3bf38df9c12f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc4
8af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727810127341116400,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sg2wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5daab43f-4305-4d11-a210-9f33cabbf773,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b128a35a281f6eff601083d538aaba63dfd5c9facfcb2be8db338c3476f0426a,PodSandboxId:5641f8eecac310537c19d25d2b608bd291690f52aa6f7b4541174bfb6aa79550,Metadata:&ContainerMetadata{Name:storage-pro
visioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727810127066874695,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79f8bf30-f1ae-4885-92cb-e89e9b0e59df,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98456691a01ced88e05c8906cf952327247b7cb82fa85cd400c7538b11a42e7,PodSandboxId:10f6b17e1c83d919d48a604b695548263d680305680590b46da13c8c5cec4a2e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},I
mage:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727810127024829441,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bznr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1977d5b8-5d2c-4102-8fb5-0be51b881658,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2026884f63bc418e0f2b997c82d10516a1c6057821f05f4cc0f4cc78cedb9513,PodSandboxId:f90ce1c80bb48346401f60743e8ee06e101434ad4c9e52b320f7acd0e537fee7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad9
41575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727810123261676562,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-338309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d3487f863c8dc14be74b7aa74e475c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:687d938ad63591e6c13a4e2ee91a7d642ddccee564c392f7fd684d6556b25e9c,PodSandboxId:aa07cfeeebd5445fa7a78c0419f4ea8cbd18af05dbf4241659706a92a9924ed6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d
90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727810123217942793,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-338309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfe786dfbabf7be43d6ef6c728169e72,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0484ee171a31951f7b237a2c4bbbdb574cee300d802f50f7d80bb1263d85444,PodSandboxId:b7b3c452d6a9119216b00140025cb317476cd1397316040edb6102b832910f0f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d269
15af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727810123222214238,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-338309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d6a905e83d59f2bc9eea4c02b2b8617,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=25649989-1b84-43dc-8527-bffabeb589a6 name=/runtime.v1.RuntimeService/ListContainers
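The Version, ImageFsInfo, and ListContainers requests logged above are ordinary CRI gRPC calls against the crio socket (unix:///var/run/crio/crio.sock, matching the cri-socket annotation in the node description further down). As a rough illustration only, a minimal Go sketch of an equivalent unfiltered ListContainers call, assuming the google.golang.org/grpc and k8s.io/cri-api modules are available and the socket path above:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the same CRI-O endpoint the kubelet uses (assumed path).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty request means no filter, so crio returns the full container
	// list, the same behaviour as the "No filters were applied" lines above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %-30s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}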
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	63e36c4e921e0       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   2 minutes ago       Running             dashboard-metrics-scraper   0                   e12658557fb5e       dashboard-metrics-scraper-c5db448b4-jdjnl
	439fea0554313       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         2 minutes ago       Running             kubernetes-dashboard        0                   26cd5579cf50b       kubernetes-dashboard-695b96c756-bjxfz
	01d9383d9dab3       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              2 minutes ago       Exited              mount-munger                0                   00e78ade8bde4       busybox-mount
	2e03e89ffa2c4       82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410                                                 2 minutes ago       Running             echoserver                  0                   2df3e3649ccf7       hello-node-connect-67bdd5bbb4-9mtcl
	89b862d61f06a       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               2 minutes ago       Running             echoserver                  0                   ff6008bbd6160       hello-node-6b9f76b5c7-pkvnh
	aba585dedf28e       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb                  2 minutes ago       Running             mysql                       0                   594d89ab2c642       mysql-6cdb49bbb-rkbcv
	f2bc7203452e5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 3 minutes ago       Running             storage-provisioner         2                   2bbd360553f4c       storage-provisioner
	c7a017fe86438       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 3 minutes ago       Running             kube-proxy                  2                   06f1fe5458693       kube-proxy-bznr6
	46b0032100944       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 3 minutes ago       Running             coredns                     2                   65d7efcc7db8c       coredns-7c65d6cfc9-sg2wn
	702f1d82e3374       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                 3 minutes ago       Running             kube-apiserver              0                   7166422aca986       kube-apiserver-functional-338309
	eebbbc7830d2c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 3 minutes ago       Running             etcd                        2                   5c5a069c73fd5       etcd-functional-338309
	f9fdc64014989       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 3 minutes ago       Running             kube-scheduler              2                   32f0809dd5f47       kube-scheduler-functional-338309
	5bd05252d12e1       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 3 minutes ago       Running             kube-controller-manager     2                   76d82abcb65e0       kube-controller-manager-functional-338309
	9b474969fe1ad       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 4 minutes ago       Exited              coredns                     1                   8fb9bca225e43       coredns-7c65d6cfc9-sg2wn
	b128a35a281f6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 4 minutes ago       Exited              storage-provisioner         1                   5641f8eecac31       storage-provisioner
	b98456691a01c       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 4 minutes ago       Exited              kube-proxy                  1                   10f6b17e1c83d       kube-proxy-bznr6
	2026884f63bc4       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 4 minutes ago       Exited              kube-scheduler              1                   f90ce1c80bb48       kube-scheduler-functional-338309
	c0484ee171a31       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 4 minutes ago       Exited              etcd                        1                   b7b3c452d6a91       etcd-functional-338309
	687d938ad6359       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 4 minutes ago       Exited              kube-controller-manager     1                   aa07cfeeebd54       kube-controller-manager-functional-338309
	
	
	==> coredns [46b003210094443b664c832144bcb3affe784f2cb51fabffcf5d43836e954a9c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59542 - 64305 "HINFO IN 1596116178045416652.9111813745425029087. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012176729s
	
	
	==> coredns [9b474969fe1ad9e98eb9e951e490b0e0346ea1e14e6c6289a3b2ee13e332e2ee] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:51144 - 22300 "HINFO IN 8871226449279754182.6817314419821737241. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014248784s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-338309
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-338309
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=functional-338309
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T19_14_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:14:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-338309
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:19:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 19:17:38 +0000   Tue, 01 Oct 2024 19:14:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 19:17:38 +0000   Tue, 01 Oct 2024 19:14:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 19:17:38 +0000   Tue, 01 Oct 2024 19:14:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 19:17:38 +0000   Tue, 01 Oct 2024 19:14:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.74
	  Hostname:    functional-338309
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 516205bdf04c4d9d916c522e9d03301e
	  System UUID:                516205bd-f04c-4d9d-916c-522e9d03301e
	  Boot ID:                    cbb5914c-0f1f-48f7-bcb8-adbd67f8fa97
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6b9f76b5c7-pkvnh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  default                     hello-node-connect-67bdd5bbb4-9mtcl          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	  default                     mysql-6cdb49bbb-rkbcv                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    3m11s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  kube-system                 coredns-7c65d6cfc9-sg2wn                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m9s
	  kube-system                 etcd-functional-338309                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m15s
	  kube-system                 kube-apiserver-functional-338309             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 kube-controller-manager-functional-338309    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-proxy-bznr6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-scheduler-functional-338309             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-jdjnl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-bjxfz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m7s                   kube-proxy       
	  Normal  Starting                 3m37s                  kube-proxy       
	  Normal  Starting                 4m18s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m19s (x8 over 5m19s)  kubelet          Node functional-338309 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m19s (x8 over 5m19s)  kubelet          Node functional-338309 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m19s (x7 over 5m19s)  kubelet          Node functional-338309 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m13s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m13s                  kubelet          Node functional-338309 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m13s                  kubelet          Node functional-338309 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m13s                  kubelet          Node functional-338309 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m12s                  kubelet          Node functional-338309 status is now: NodeReady
	  Normal  RegisteredNode           5m9s                   node-controller  Node functional-338309 event: Registered Node functional-338309 in Controller
	  Normal  Starting                 4m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m23s (x8 over 4m23s)  kubelet          Node functional-338309 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x8 over 4m23s)  kubelet          Node functional-338309 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x7 over 4m23s)  kubelet          Node functional-338309 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m16s                  node-controller  Node functional-338309 event: Registered Node functional-338309 in Controller
	  Normal  NodeHasNoDiskPressure    3m42s (x8 over 3m42s)  kubelet          Node functional-338309 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  3m42s (x8 over 3m42s)  kubelet          Node functional-338309 status is now: NodeHasSufficientMemory
	  Normal  Starting                 3m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     3m42s (x7 over 3m42s)  kubelet          Node functional-338309 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m35s                  node-controller  Node functional-338309 event: Registered Node functional-338309 in Controller
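For reading the "Allocated resources" table above: the percentages are the summed pod requests and limits divided by the node's Allocatable values (cpu: 2, i.e. 2000m; memory: 3912780Ki, roughly 3821Mi). A small Go sketch of that arithmetic, assuming integer truncation, which is consistent with 1350m of 2000m being shown as 67%:

package main

import "fmt"

func main() {
	// Allocatable values from the node description above.
	allocCPUMilli := int64(2000)        // cpu: 2
	allocMemMi := int64(3912780) / 1024 // memory: 3912780Ki ≈ 3821Mi

	// Totals from the "Allocated resources" table above.
	reqCPUMilli, limCPUMilli := int64(1350), int64(700)
	reqMemMi, limMemMi := int64(682), int64(870)

	// Integer division truncates, matching the table:
	// cpu 67% / 35%, memory 17% / 22%.
	fmt.Printf("cpu    %d%% (requests)  %d%% (limits)\n",
		reqCPUMilli*100/allocCPUMilli, limCPUMilli*100/allocCPUMilli)
	fmt.Printf("memory %d%% (requests)  %d%% (limits)\n",
		reqMemMi*100/allocMemMi, limMemMi*100/allocMemMi)
}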
	
	
	==> dmesg <==
	[  +0.182208] systemd-fstab-generator[2361]: Ignoring "noauto" option for root device
	[  +0.144199] systemd-fstab-generator[2373]: Ignoring "noauto" option for root device
	[  +0.275590] systemd-fstab-generator[2401]: Ignoring "noauto" option for root device
	[  +0.685763] systemd-fstab-generator[2520]: Ignoring "noauto" option for root device
	[  +1.821401] systemd-fstab-generator[2642]: Ignoring "noauto" option for root device
	[  +4.555660] kauditd_printk_skb: 184 callbacks suppressed
	[ +12.697971] systemd-fstab-generator[3402]: Ignoring "noauto" option for root device
	[  +0.106759] kauditd_printk_skb: 37 callbacks suppressed
	[ +19.079776] systemd-fstab-generator[4257]: Ignoring "noauto" option for root device
	[  +0.078995] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.066285] systemd-fstab-generator[4269]: Ignoring "noauto" option for root device
	[  +0.167708] systemd-fstab-generator[4283]: Ignoring "noauto" option for root device
	[  +0.145627] systemd-fstab-generator[4295]: Ignoring "noauto" option for root device
	[  +0.265957] systemd-fstab-generator[4323]: Ignoring "noauto" option for root device
	[  +0.941495] systemd-fstab-generator[4445]: Ignoring "noauto" option for root device
	[Oct 1 19:16] systemd-fstab-generator[4914]: Ignoring "noauto" option for root device
	[  +0.721988] kauditd_printk_skb: 206 callbacks suppressed
	[  +6.328671] kauditd_printk_skb: 35 callbacks suppressed
	[ +13.672696] systemd-fstab-generator[5453]: Ignoring "noauto" option for root device
	[  +6.157342] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.119969] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.710641] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.325341] kauditd_printk_skb: 15 callbacks suppressed
	[Oct 1 19:17] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.498369] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [c0484ee171a31951f7b237a2c4bbbdb574cee300d802f50f7d80bb1263d85444] <==
	{"level":"info","ts":"2024-10-01T19:15:24.971889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7645e49063b72e60 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-01T19:15:24.971955Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7645e49063b72e60 received MsgPreVoteResp from 7645e49063b72e60 at term 2"}
	{"level":"info","ts":"2024-10-01T19:15:24.971993Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7645e49063b72e60 became candidate at term 3"}
	{"level":"info","ts":"2024-10-01T19:15:24.972027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7645e49063b72e60 received MsgVoteResp from 7645e49063b72e60 at term 3"}
	{"level":"info","ts":"2024-10-01T19:15:24.972054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7645e49063b72e60 became leader at term 3"}
	{"level":"info","ts":"2024-10-01T19:15:24.972079Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7645e49063b72e60 elected leader 7645e49063b72e60 at term 3"}
	{"level":"info","ts":"2024-10-01T19:15:24.977404Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T19:15:24.977407Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7645e49063b72e60","local-member-attributes":"{Name:functional-338309 ClientURLs:[https://192.168.50.74:2379]}","request-path":"/0/members/7645e49063b72e60/attributes","cluster-id":"ff8624d4cf0220d5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-01T19:15:24.978016Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T19:15:24.978284Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-01T19:15:24.978311Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-01T19:15:24.978736Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T19:15:24.978809Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T19:15:24.979700Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.74:2379"}
	{"level":"info","ts":"2024-10-01T19:15:24.979749Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-01T19:15:52.638707Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-01T19:15:52.638782Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-338309","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.74:2380"],"advertise-client-urls":["https://192.168.50.74:2379"]}
	{"level":"warn","ts":"2024-10-01T19:15:52.638845Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-01T19:15:52.638960Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-01T19:15:52.694312Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.74:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-01T19:15:52.694400Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.74:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-01T19:15:52.694467Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7645e49063b72e60","current-leader-member-id":"7645e49063b72e60"}
	{"level":"info","ts":"2024-10-01T19:15:52.697730Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.50.74:2380"}
	{"level":"info","ts":"2024-10-01T19:15:52.697924Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.50.74:2380"}
	{"level":"info","ts":"2024-10-01T19:15:52.697960Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-338309","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.74:2380"],"advertise-client-urls":["https://192.168.50.74:2379"]}
	
	
	==> etcd [eebbbc7830d2cdf271caa6836cf33a55b5003738185a1402e9d1bf026b669716] <==
	{"level":"info","ts":"2024-10-01T19:16:41.490564Z","caller":"traceutil/trace.go:171","msg":"trace[206467220] linearizableReadLoop","detail":"{readStateIndex:798; appliedIndex:797; }","duration":"287.371446ms","start":"2024-10-01T19:16:41.203166Z","end":"2024-10-01T19:16:41.490537Z","steps":["trace[206467220] 'read index received'  (duration: 287.224744ms)","trace[206467220] 'applied index is now lower than readState.Index'  (duration: 146.182µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-01T19:16:41.490877Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"287.617255ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T19:16:41.490923Z","caller":"traceutil/trace.go:171","msg":"trace[197947899] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:732; }","duration":"287.751915ms","start":"2024-10-01T19:16:41.203161Z","end":"2024-10-01T19:16:41.490912Z","steps":["trace[197947899] 'agreement among raft nodes before linearized reading'  (duration: 287.539836ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T19:16:41.491276Z","caller":"traceutil/trace.go:171","msg":"trace[1805481663] transaction","detail":"{read_only:false; response_revision:732; number_of_response:1; }","duration":"350.446425ms","start":"2024-10-01T19:16:41.140809Z","end":"2024-10-01T19:16:41.491255Z","steps":["trace[1805481663] 'process raft request'  (duration: 349.625649ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T19:16:41.491650Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T19:16:41.140791Z","time spent":"350.514777ms","remote":"127.0.0.1:34510","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":827,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/persistentvolumeclaims/default/myclaim\" mod_revision:0 > success:<request_put:<key:\"/registry/persistentvolumeclaims/default/myclaim\" value_size:771 >> failure:<>"}
	{"level":"warn","ts":"2024-10-01T19:16:44.087792Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"256.698958ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T19:16:44.087950Z","caller":"traceutil/trace.go:171","msg":"trace[834001969] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:742; }","duration":"256.823235ms","start":"2024-10-01T19:16:43.831069Z","end":"2024-10-01T19:16:44.087892Z","steps":["trace[834001969] 'range keys from in-memory index tree'  (duration: 256.595706ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T19:16:47.883011Z","caller":"traceutil/trace.go:171","msg":"trace[1807549779] transaction","detail":"{read_only:false; response_revision:748; number_of_response:1; }","duration":"488.0183ms","start":"2024-10-01T19:16:47.394977Z","end":"2024-10-01T19:16:47.882995Z","steps":["trace[1807549779] 'process raft request'  (duration: 487.86904ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T19:16:47.883554Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T19:16:47.394961Z","time spent":"488.269915ms","remote":"127.0.0.1:34608","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":557,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/functional-338309\" mod_revision:727 > success:<request_put:<key:\"/registry/leases/kube-node-lease/functional-338309\" value_size:499 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/functional-338309\" > >"}
	{"level":"info","ts":"2024-10-01T19:16:47.900924Z","caller":"traceutil/trace.go:171","msg":"trace[1748578924] linearizableReadLoop","detail":"{readStateIndex:816; appliedIndex:815; }","duration":"256.894896ms","start":"2024-10-01T19:16:47.644016Z","end":"2024-10-01T19:16:47.900911Z","steps":["trace[1748578924] 'read index received'  (duration: 240.148411ms)","trace[1748578924] 'applied index is now lower than readState.Index'  (duration: 16.745886ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-01T19:16:47.901293Z","caller":"traceutil/trace.go:171","msg":"trace[581030428] transaction","detail":"{read_only:false; response_revision:749; number_of_response:1; }","duration":"308.364225ms","start":"2024-10-01T19:16:47.592918Z","end":"2024-10-01T19:16:47.901282Z","steps":["trace[581030428] 'process raft request'  (duration: 307.912621ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T19:16:47.902521Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T19:16:47.592901Z","time spent":"308.454587ms","remote":"127.0.0.1:34608","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":682,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-hpyvbtokizfwla5m3633rrz24y\" mod_revision:728 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-hpyvbtokizfwla5m3633rrz24y\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-hpyvbtokizfwla5m3633rrz24y\" > >"}
	{"level":"warn","ts":"2024-10-01T19:16:47.902796Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"258.769721ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T19:16:47.902819Z","caller":"traceutil/trace.go:171","msg":"trace[244903646] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:749; }","duration":"258.801015ms","start":"2024-10-01T19:16:47.644011Z","end":"2024-10-01T19:16:47.902812Z","steps":["trace[244903646] 'agreement among raft nodes before linearized reading'  (duration: 258.722291ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T19:16:47.903979Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"249.446079ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2024-10-01T19:16:47.904391Z","caller":"traceutil/trace.go:171","msg":"trace[1360767397] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:749; }","duration":"249.86911ms","start":"2024-10-01T19:16:47.654509Z","end":"2024-10-01T19:16:47.904379Z","steps":["trace[1360767397] 'agreement among raft nodes before linearized reading'  (duration: 248.917953ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T19:16:52.237280Z","caller":"traceutil/trace.go:171","msg":"trace[1694012444] transaction","detail":"{read_only:false; response_revision:761; number_of_response:1; }","duration":"255.892502ms","start":"2024-10-01T19:16:51.981372Z","end":"2024-10-01T19:16:52.237265Z","steps":["trace[1694012444] 'process raft request'  (duration: 255.659649ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T19:16:52.238005Z","caller":"traceutil/trace.go:171","msg":"trace[693093936] linearizableReadLoop","detail":"{readStateIndex:829; appliedIndex:829; }","duration":"183.290371ms","start":"2024-10-01T19:16:52.054706Z","end":"2024-10-01T19:16:52.237996Z","steps":["trace[693093936] 'read index received'  (duration: 183.287361ms)","trace[693093936] 'applied index is now lower than readState.Index'  (duration: 2.501µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-01T19:16:52.239452Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.727891ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T19:16:52.239727Z","caller":"traceutil/trace.go:171","msg":"trace[1742204363] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:761; }","duration":"185.013597ms","start":"2024-10-01T19:16:52.054699Z","end":"2024-10-01T19:16:52.239713Z","steps":["trace[1742204363] 'agreement among raft nodes before linearized reading'  (duration: 184.717493ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T19:17:00.398307Z","caller":"traceutil/trace.go:171","msg":"trace[1273947206] transaction","detail":"{read_only:false; response_revision:802; number_of_response:1; }","duration":"103.122596ms","start":"2024-10-01T19:17:00.295158Z","end":"2024-10-01T19:17:00.398280Z","steps":["trace[1273947206] 'process raft request'  (duration: 102.749059ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T19:17:10.800056Z","caller":"traceutil/trace.go:171","msg":"trace[1300526808] transaction","detail":"{read_only:false; response_revision:882; number_of_response:1; }","duration":"329.001639ms","start":"2024-10-01T19:17:10.471034Z","end":"2024-10-01T19:17:10.800036Z","steps":["trace[1300526808] 'process raft request'  (duration: 328.889218ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T19:17:10.800271Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T19:17:10.471016Z","time spent":"329.179861ms","remote":"127.0.0.1:34518","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:881 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-10-01T19:17:16.930531Z","caller":"traceutil/trace.go:171","msg":"trace[1369221982] transaction","detail":"{read_only:false; response_revision:903; number_of_response:1; }","duration":"102.525059ms","start":"2024-10-01T19:17:16.827992Z","end":"2024-10-01T19:17:16.930517Z","steps":["trace[1369221982] 'process raft request'  (duration: 102.398437ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T19:17:43.240226Z","caller":"traceutil/trace.go:171","msg":"trace[1990898145] transaction","detail":"{read_only:false; response_revision:926; number_of_response:1; }","duration":"177.968912ms","start":"2024-10-01T19:17:43.062242Z","end":"2024-10-01T19:17:43.240211Z","steps":["trace[1990898145] 'process raft request'  (duration: 177.58929ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:19:45 up 5 min,  0 users,  load average: 0.29, 0.42, 0.20
	Linux functional-338309 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [702f1d82e337429efdff9fa044201c8564204b746b89d56ee2bad511935426c4] <==
	I1001 19:16:06.798327       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1001 19:16:06.798470       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1001 19:16:06.798509       1 policy_source.go:224] refreshing policies
	I1001 19:16:06.798916       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E1001 19:16:06.805479       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1001 19:16:06.818490       1 cache.go:39] Caches are synced for autoregister controller
	I1001 19:16:06.833516       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1001 19:16:07.595828       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1001 19:16:08.191839       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1001 19:16:08.210920       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1001 19:16:08.254229       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1001 19:16:08.283941       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1001 19:16:08.293544       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1001 19:16:10.079532       1 controller.go:615] quota admission added evaluator for: endpoints
	I1001 19:16:10.280698       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1001 19:16:30.109479       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.19.131"}
	I1001 19:16:34.776965       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.98.35.68"}
	I1001 19:16:34.826673       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1001 19:16:35.638174       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.24.149"}
	I1001 19:16:53.260788       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.239.152"}
	E1001 19:16:55.968602       1 conn.go:339] Error on socket receive: read tcp 192.168.50.74:8441->192.168.50.1:32828: use of closed network connection
	E1001 19:16:57.473613       1 conn.go:339] Error on socket receive: read tcp 192.168.50.74:8441->192.168.50.1:32838: use of closed network connection
	I1001 19:17:01.977793       1 controller.go:615] quota admission added evaluator for: namespaces
	I1001 19:17:02.400537       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.46.125"}
	I1001 19:17:02.456918       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.145.98"}
	
	
	==> kube-controller-manager [5bd05252d12e12bc93bdd4d0912e108e0e263f44ede79b4a9277f1fc030be13a] <==
	I1001 19:17:02.174082       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="20.43886ms"
	E1001 19:17:02.176294       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1001 19:17:02.174169       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="63.346548ms"
	E1001 19:17:02.176559       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1001 19:17:02.205048       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="27.219753ms"
	E1001 19:17:02.205079       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1001 19:17:02.213999       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="35.825926ms"
	E1001 19:17:02.214042       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1001 19:17:02.220259       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="10.648169ms"
	E1001 19:17:02.220954       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1001 19:17:02.230462       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="15.216795ms"
	E1001 19:17:02.230511       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1001 19:17:02.278401       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="56.164575ms"
	I1001 19:17:02.303194       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="63.956517ms"
	I1001 19:17:02.316378       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="37.921298ms"
	I1001 19:17:02.316485       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="60.063µs"
	I1001 19:17:02.338062       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="34.787516ms"
	I1001 19:17:02.340352       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="196.811µs"
	I1001 19:17:02.372558       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="93.023µs"
	I1001 19:17:08.100010       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-338309"
	I1001 19:17:11.469233       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="19.175949ms"
	I1001 19:17:11.469340       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="44.656µs"
	I1001 19:17:14.481401       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="13.273404ms"
	I1001 19:17:14.481571       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="57.046µs"
	I1001 19:17:38.840288       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-338309"
	
	
	==> kube-controller-manager [687d938ad63591e6c13a4e2ee91a7d642ddccee564c392f7fd684d6556b25e9c] <==
	I1001 19:15:29.748550       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1001 19:15:29.755239       1 shared_informer.go:320] Caches are synced for TTL
	I1001 19:15:29.758740       1 shared_informer.go:320] Caches are synced for PVC protection
	I1001 19:15:29.759921       1 shared_informer.go:320] Caches are synced for daemon sets
	I1001 19:15:29.761241       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1001 19:15:29.767458       1 shared_informer.go:320] Caches are synced for node
	I1001 19:15:29.767598       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1001 19:15:29.767657       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1001 19:15:29.767683       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1001 19:15:29.767745       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1001 19:15:29.767842       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-338309"
	I1001 19:15:29.767875       1 shared_informer.go:320] Caches are synced for persistent volume
	I1001 19:15:29.770708       1 shared_informer.go:320] Caches are synced for taint
	I1001 19:15:29.770924       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1001 19:15:29.771010       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-338309"
	I1001 19:15:29.771068       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1001 19:15:29.774884       1 shared_informer.go:320] Caches are synced for GC
	I1001 19:15:29.799155       1 shared_informer.go:320] Caches are synced for resource quota
	I1001 19:15:29.812258       1 shared_informer.go:320] Caches are synced for attach detach
	I1001 19:15:29.824877       1 shared_informer.go:320] Caches are synced for resource quota
	I1001 19:15:29.850403       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1001 19:15:29.865405       1 shared_informer.go:320] Caches are synced for endpoint
	I1001 19:15:30.245065       1 shared_informer.go:320] Caches are synced for garbage collector
	I1001 19:15:30.310790       1 shared_informer.go:320] Caches are synced for garbage collector
	I1001 19:15:30.310834       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [b98456691a01ced88e05c8906cf952327247b7cb82fa85cd400c7538b11a42e7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 19:15:27.448438       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 19:15:27.466760       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.74"]
	E1001 19:15:27.466976       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 19:15:27.521420       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 19:15:27.521528       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 19:15:27.521571       1 server_linux.go:169] "Using iptables Proxier"
	I1001 19:15:27.525950       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 19:15:27.526316       1 server.go:483] "Version info" version="v1.31.1"
	I1001 19:15:27.526468       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 19:15:27.527870       1 config.go:199] "Starting service config controller"
	I1001 19:15:27.527963       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 19:15:27.528036       1 config.go:105] "Starting endpoint slice config controller"
	I1001 19:15:27.528059       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 19:15:27.528565       1 config.go:328] "Starting node config controller"
	I1001 19:15:27.528630       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 19:15:27.628636       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 19:15:27.628708       1 shared_informer.go:320] Caches are synced for service config
	I1001 19:15:27.628928       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [c7a017fe864389c711635adf9004e6a874981a3c80714db43a828d62c99795b0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 19:16:08.111076       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 19:16:08.118780       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.74"]
	E1001 19:16:08.118862       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 19:16:08.170517       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 19:16:08.170569       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 19:16:08.170593       1 server_linux.go:169] "Using iptables Proxier"
	I1001 19:16:08.174315       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 19:16:08.174558       1 server.go:483] "Version info" version="v1.31.1"
	I1001 19:16:08.174589       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 19:16:08.181348       1 config.go:199] "Starting service config controller"
	I1001 19:16:08.181444       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 19:16:08.181529       1 config.go:105] "Starting endpoint slice config controller"
	I1001 19:16:08.181573       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 19:16:08.182595       1 config.go:328] "Starting node config controller"
	I1001 19:16:08.182673       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 19:16:08.281676       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 19:16:08.281818       1 shared_informer.go:320] Caches are synced for service config
	I1001 19:16:08.283104       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2026884f63bc418e0f2b997c82d10516a1c6057821f05f4cc0f4cc78cedb9513] <==
	I1001 19:15:24.183841       1 serving.go:386] Generated self-signed cert in-memory
	W1001 19:15:26.254511       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1001 19:15:26.254545       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1001 19:15:26.254555       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1001 19:15:26.254560       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1001 19:15:26.296291       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1001 19:15:26.296337       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 19:15:26.298366       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1001 19:15:26.298486       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 19:15:26.298513       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1001 19:15:26.298526       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1001 19:15:26.398713       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1001 19:15:52.652360       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1001 19:15:52.652574       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1001 19:15:52.654419       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1001 19:15:52.654710       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f9fdc640149899ab40c0830c96f3f1a37e5f66ad175458029198b780c17a0d32] <==
	I1001 19:16:05.294237       1 serving.go:386] Generated self-signed cert in-memory
	W1001 19:16:06.666679       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1001 19:16:06.666717       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1001 19:16:06.666727       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1001 19:16:06.666733       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1001 19:16:06.699825       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1001 19:16:06.699897       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 19:16:06.708915       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1001 19:16:06.709046       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 19:16:06.709077       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1001 19:16:06.709417       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1001 19:16:06.809260       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 01 19:18:13 functional-338309 kubelet[4921]: E1001 19:18:13.596397    4921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810293595635371,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252069,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:18:13 functional-338309 kubelet[4921]: E1001 19:18:13.596465    4921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810293595635371,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252069,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:18:23 functional-338309 kubelet[4921]: E1001 19:18:23.598925    4921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810303597604184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252069,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:18:23 functional-338309 kubelet[4921]: E1001 19:18:23.598961    4921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810303597604184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252069,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:18:33 functional-338309 kubelet[4921]: E1001 19:18:33.600865    4921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810313600540456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252069,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:18:33 functional-338309 kubelet[4921]: E1001 19:18:33.600941    4921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810313600540456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252069,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:18:43 functional-338309 kubelet[4921]: E1001 19:18:43.605571    4921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810323605187536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252069,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:18:43 functional-338309 kubelet[4921]: E1001 19:18:43.605930    4921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810323605187536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252069,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:18:53 functional-338309 kubelet[4921]: E1001 19:18:53.607985    4921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810333607760274,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252069,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:18:53 functional-338309 kubelet[4921]: E1001 19:18:53.608021    4921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810333607760274,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252069,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:19:03 functional-338309 kubelet[4921]: E1001 19:19:03.501503    4921 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 01 19:19:03 functional-338309 kubelet[4921]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 01 19:19:03 functional-338309 kubelet[4921]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 01 19:19:03 functional-338309 kubelet[4921]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 19:19:03 functional-338309 kubelet[4921]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 19:19:03 functional-338309 kubelet[4921]: E1001 19:19:03.609511    4921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810343609081433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252069,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:19:03 functional-338309 kubelet[4921]: E1001 19:19:03.609534    4921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810343609081433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252069,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:19:13 functional-338309 kubelet[4921]: E1001 19:19:13.611996    4921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810353611684673,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252069,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:19:13 functional-338309 kubelet[4921]: E1001 19:19:13.612031    4921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810353611684673,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252069,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:19:23 functional-338309 kubelet[4921]: E1001 19:19:23.613598    4921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810363613204666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252069,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:19:23 functional-338309 kubelet[4921]: E1001 19:19:23.613650    4921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810363613204666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252069,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:19:33 functional-338309 kubelet[4921]: E1001 19:19:33.615552    4921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810373615054397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252069,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:19:33 functional-338309 kubelet[4921]: E1001 19:19:33.615578    4921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810373615054397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252069,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:19:43 functional-338309 kubelet[4921]: E1001 19:19:43.618805    4921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810383618437946,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252069,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:19:43 functional-338309 kubelet[4921]: E1001 19:19:43.619182    4921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810383618437946,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:252069,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [439fea055431366a443cef48341c3008fdeb0d28c7c547f23d091080e6c38631] <==
	2024/10/01 19:17:11 Using namespace: kubernetes-dashboard
	2024/10/01 19:17:11 Using in-cluster config to connect to apiserver
	2024/10/01 19:17:11 Using secret token for csrf signing
	2024/10/01 19:17:11 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/10/01 19:17:11 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/10/01 19:17:11 Successful initial request to the apiserver, version: v1.31.1
	2024/10/01 19:17:11 Generating JWE encryption key
	2024/10/01 19:17:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/10/01 19:17:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/10/01 19:17:11 Initializing JWE encryption key from synchronized object
	2024/10/01 19:17:11 Creating in-cluster Sidecar client
	2024/10/01 19:17:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/01 19:17:11 Serving insecurely on HTTP port: 9090
	2024/10/01 19:17:41 Successful request to sidecar
	2024/10/01 19:17:11 Starting overwatch
	
	
	==> storage-provisioner [b128a35a281f6eff601083d538aaba63dfd5c9facfcb2be8db338c3476f0426a] <==
	I1001 19:15:27.238584       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1001 19:15:27.263989       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1001 19:15:27.269790       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1001 19:15:44.681729       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1001 19:15:44.682053       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-338309_35958c00-8038-4c91-bbd6-a93221f8e434!
	I1001 19:15:44.682531       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bcc60780-ec19-472c-9f33-99eded75fcd9", APIVersion:"v1", ResourceVersion:"553", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-338309_35958c00-8038-4c91-bbd6-a93221f8e434 became leader
	I1001 19:15:44.783326       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-338309_35958c00-8038-4c91-bbd6-a93221f8e434!
	
	
	==> storage-provisioner [f2bc7203452e500b37a60575cca742c03f776ff909abbf7d4bd78de49ab061bc] <==
	I1001 19:16:07.996323       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1001 19:16:08.035431       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1001 19:16:08.035753       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1001 19:16:25.436308       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1001 19:16:25.436567       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-338309_19f17ac9-de25-4860-b74a-8f11b059487c!
	I1001 19:16:25.437238       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bcc60780-ec19-472c-9f33-99eded75fcd9", APIVersion:"v1", ResourceVersion:"650", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-338309_19f17ac9-de25-4860-b74a-8f11b059487c became leader
	I1001 19:16:25.537641       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-338309_19f17ac9-de25-4860-b74a-8f11b059487c!
	I1001 19:16:41.513892       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1001 19:16:41.513951       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    987ef8cd-69ac-45b6-a956-ad10b5c4f79a 369 0 2024-10-01 19:14:37 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-10-01 19:14:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-1fab0c57-07bb-4c3b-ad05-c2f541dffbec &PersistentVolumeClaim{ObjectMeta:{myclaim  default  1fab0c57-07bb-4c3b-ad05-c2f541dffbec 733 0 2024-10-01 19:16:41 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-10-01 19:16:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-10-01 19:16:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1001 19:16:41.514556       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-1fab0c57-07bb-4c3b-ad05-c2f541dffbec" provisioned
	I1001 19:16:41.514577       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1001 19:16:41.514588       1 volume_store.go:212] Trying to save persistentvolume "pvc-1fab0c57-07bb-4c3b-ad05-c2f541dffbec"
	I1001 19:16:41.518374       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"1fab0c57-07bb-4c3b-ad05-c2f541dffbec", APIVersion:"v1", ResourceVersion:"733", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1001 19:16:41.579235       1 volume_store.go:219] persistentvolume "pvc-1fab0c57-07bb-4c3b-ad05-c2f541dffbec" saved
	I1001 19:16:41.579433       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"1fab0c57-07bb-4c3b-ad05-c2f541dffbec", APIVersion:"v1", ResourceVersion:"733", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-1fab0c57-07bb-4c3b-ad05-c2f541dffbec
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-338309 -n functional-338309
helpers_test.go:261: (dbg) Run:  kubectl --context functional-338309 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-338309 describe pod busybox-mount sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-338309 describe pod busybox-mount sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-338309/192.168.50.74
	Start Time:       Tue, 01 Oct 2024 19:17:00 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  cri-o://01d9383d9dab31e0a93eda6e0efe9d35defc098085e1cdc99bd3fa37f1a01836
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 01 Oct 2024 19:17:04 +0000
	      Finished:     Tue, 01 Oct 2024 19:17:04 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2hq27 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-2hq27:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  2m46s  default-scheduler  Successfully assigned default/busybox-mount to functional-338309
	  Normal  Pulling    2m45s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     2m42s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.199s (3.2s including waiting). Image size: 4631262 bytes.
	  Normal  Created    2m42s  kubelet            Created container mount-munger
	  Normal  Started    2m42s  kubelet            Started container mount-munger
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-338309/192.168.50.74
	Start Time:       Tue, 01 Oct 2024 19:16:44 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8pnb5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-8pnb5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3m2s  default-scheduler  Successfully assigned default/sp-pod to functional-338309

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (190.83s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-193737 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.486121522s)

                                                
                                                
-- stdout --
	* Stopping node "ha-193737-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 19:24:27.098586   35671 out.go:345] Setting OutFile to fd 1 ...
	I1001 19:24:27.098746   35671 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:24:27.098756   35671 out.go:358] Setting ErrFile to fd 2...
	I1001 19:24:27.098760   35671 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:24:27.098951   35671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 19:24:27.099214   35671 mustload.go:65] Loading cluster: ha-193737
	I1001 19:24:27.099643   35671 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:24:27.099659   35671 stop.go:39] StopHost: ha-193737-m02
	I1001 19:24:27.100063   35671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:24:27.100131   35671 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:24:27.118070   35671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40337
	I1001 19:24:27.118533   35671 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:24:27.119084   35671 main.go:141] libmachine: Using API Version  1
	I1001 19:24:27.119112   35671 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:24:27.119427   35671 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:24:27.121472   35671 out.go:177] * Stopping node "ha-193737-m02"  ...
	I1001 19:24:27.122699   35671 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1001 19:24:27.122740   35671 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:24:27.122942   35671 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1001 19:24:27.122970   35671 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:24:27.125874   35671 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:24:27.126238   35671 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:24:27.126275   35671 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:24:27.126390   35671 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:24:27.126551   35671 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:24:27.126723   35671 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:24:27.126855   35671 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa Username:docker}
	I1001 19:24:27.219490   35671 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1001 19:24:27.272513   35671 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1001 19:24:27.325976   35671 main.go:141] libmachine: Stopping "ha-193737-m02"...
	I1001 19:24:27.326033   35671 main.go:141] libmachine: (ha-193737-m02) Calling .GetState
	I1001 19:24:27.327531   35671 main.go:141] libmachine: (ha-193737-m02) Calling .Stop
	I1001 19:24:27.330740   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 0/120
	I1001 19:24:28.332278   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 1/120
	I1001 19:24:29.333525   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 2/120
	I1001 19:24:30.335240   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 3/120
	I1001 19:24:31.336414   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 4/120
	I1001 19:24:32.338389   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 5/120
	I1001 19:24:33.339931   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 6/120
	I1001 19:24:34.341586   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 7/120
	I1001 19:24:35.342944   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 8/120
	I1001 19:24:36.344465   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 9/120
	I1001 19:24:37.346397   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 10/120
	I1001 19:24:38.347515   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 11/120
	I1001 19:24:39.348826   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 12/120
	I1001 19:24:40.350222   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 13/120
	I1001 19:24:41.351578   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 14/120
	I1001 19:24:42.353985   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 15/120
	I1001 19:24:43.355477   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 16/120
	I1001 19:24:44.357120   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 17/120
	I1001 19:24:45.358757   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 18/120
	I1001 19:24:46.359974   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 19/120
	I1001 19:24:47.361979   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 20/120
	I1001 19:24:48.363706   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 21/120
	I1001 19:24:49.365125   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 22/120
	I1001 19:24:50.367366   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 23/120
	I1001 19:24:51.369341   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 24/120
	I1001 19:24:52.371275   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 25/120
	I1001 19:24:53.373124   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 26/120
	I1001 19:24:54.374573   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 27/120
	I1001 19:24:55.376290   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 28/120
	I1001 19:24:56.377878   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 29/120
	I1001 19:24:57.380061   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 30/120
	I1001 19:24:58.382090   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 31/120
	I1001 19:24:59.384410   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 32/120
	I1001 19:25:00.386177   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 33/120
	I1001 19:25:01.387634   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 34/120
	I1001 19:25:02.389543   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 35/120
	I1001 19:25:03.391489   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 36/120
	I1001 19:25:04.393639   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 37/120
	I1001 19:25:05.395603   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 38/120
	I1001 19:25:06.397236   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 39/120
	I1001 19:25:07.399236   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 40/120
	I1001 19:25:08.400608   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 41/120
	I1001 19:25:09.402772   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 42/120
	I1001 19:25:10.404420   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 43/120
	I1001 19:25:11.405924   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 44/120
	I1001 19:25:12.408227   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 45/120
	I1001 19:25:13.410598   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 46/120
	I1001 19:25:14.412881   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 47/120
	I1001 19:25:15.415048   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 48/120
	I1001 19:25:16.416812   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 49/120
	I1001 19:25:17.419336   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 50/120
	I1001 19:25:18.420881   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 51/120
	I1001 19:25:19.422781   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 52/120
	I1001 19:25:20.424928   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 53/120
	I1001 19:25:21.426994   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 54/120
	I1001 19:25:22.428742   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 55/120
	I1001 19:25:23.431001   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 56/120
	I1001 19:25:24.432510   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 57/120
	I1001 19:25:25.434870   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 58/120
	I1001 19:25:26.436514   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 59/120
	I1001 19:25:27.438738   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 60/120
	I1001 19:25:28.440315   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 61/120
	I1001 19:25:29.441889   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 62/120
	I1001 19:25:30.443345   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 63/120
	I1001 19:25:31.444636   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 64/120
	I1001 19:25:32.446600   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 65/120
	I1001 19:25:33.448102   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 66/120
	I1001 19:25:34.449674   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 67/120
	I1001 19:25:35.451852   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 68/120
	I1001 19:25:36.453366   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 69/120
	I1001 19:25:37.454720   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 70/120
	I1001 19:25:38.456433   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 71/120
	I1001 19:25:39.457900   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 72/120
	I1001 19:25:40.459427   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 73/120
	I1001 19:25:41.460878   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 74/120
	I1001 19:25:42.462741   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 75/120
	I1001 19:25:43.465316   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 76/120
	I1001 19:25:44.467332   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 77/120
	I1001 19:25:45.468906   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 78/120
	I1001 19:25:46.470822   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 79/120
	I1001 19:25:47.473095   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 80/120
	I1001 19:25:48.474634   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 81/120
	I1001 19:25:49.475894   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 82/120
	I1001 19:25:50.477850   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 83/120
	I1001 19:25:51.479030   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 84/120
	I1001 19:25:52.480933   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 85/120
	I1001 19:25:53.482260   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 86/120
	I1001 19:25:54.483713   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 87/120
	I1001 19:25:55.485279   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 88/120
	I1001 19:25:56.487129   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 89/120
	I1001 19:25:57.489424   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 90/120
	I1001 19:25:58.490986   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 91/120
	I1001 19:25:59.492326   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 92/120
	I1001 19:26:00.493547   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 93/120
	I1001 19:26:01.494949   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 94/120
	I1001 19:26:02.497155   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 95/120
	I1001 19:26:03.499000   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 96/120
	I1001 19:26:04.500900   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 97/120
	I1001 19:26:05.502544   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 98/120
	I1001 19:26:06.504980   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 99/120
	I1001 19:26:07.506815   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 100/120
	I1001 19:26:08.508218   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 101/120
	I1001 19:26:09.509750   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 102/120
	I1001 19:26:10.511441   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 103/120
	I1001 19:26:11.513914   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 104/120
	I1001 19:26:12.516309   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 105/120
	I1001 19:26:13.517602   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 106/120
	I1001 19:26:14.519238   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 107/120
	I1001 19:26:15.520899   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 108/120
	I1001 19:26:16.522979   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 109/120
	I1001 19:26:17.524832   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 110/120
	I1001 19:26:18.526869   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 111/120
	I1001 19:26:19.528504   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 112/120
	I1001 19:26:20.529901   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 113/120
	I1001 19:26:21.531540   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 114/120
	I1001 19:26:22.533499   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 115/120
	I1001 19:26:23.534863   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 116/120
	I1001 19:26:24.537062   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 117/120
	I1001 19:26:25.538673   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 118/120
	I1001 19:26:26.540151   35671 main.go:141] libmachine: (ha-193737-m02) Waiting for machine to stop 119/120
	I1001 19:26:27.541174   35671 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1001 19:26:27.541291   35671 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-193737 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 status -v=7 --alsologtostderr
E1001 19:26:34.844598   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:371: (dbg) Done: out/minikube-linux-amd64 -p ha-193737 status -v=7 --alsologtostderr: (18.719082054s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-193737 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-193737 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-193737 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-193737 status -v=7 --alsologtostderr": 
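Editor's note on the failure above: the stderr block shows minikube backing up /etc/cni and /etc/kubernetes to /var/lib/minikube/backup over SSH, then asking the kvm2 driver to stop ha-193737-m02 and polling its state once per second for 120 attempts; the domain still reported "Running" after the last poll, so the stop was abandoned and the test failed with exit status 30. The Go sketch below illustrates only the shape of that bounded stop-and-poll loop; the vm interface, stopWithTimeout, and fakeVM names are hypothetical stand-ins, not minikube's actual libmachine API.

// A minimal sketch, assuming hypothetical names (vm, stopWithTimeout, fakeVM).
package main

import (
	"fmt"
	"log"
	"time"
)

// vm models the two driver calls visible in the log: .Stop and .GetState.
type vm interface {
	Stop() error               // ask the hypervisor to shut the machine down
	GetState() (string, error) // e.g. "Running" or "Stopped"
}

// stopWithTimeout issues Stop once, then polls GetState at a fixed interval,
// mirroring the "Waiting for machine to stop 0/120 .. 119/120" progression.
func stopWithTimeout(name string, m vm, attempts int, interval time.Duration) error {
	if err := m.Stop(); err != nil {
		return fmt.Errorf("stop %s: %w", name, err)
	}
	for i := 0; i < attempts; i++ {
		state, err := m.GetState()
		if err != nil {
			return fmt.Errorf("get state of %s: %w", name, err)
		}
		if state != "Running" {
			return nil // the machine left the Running state in time
		}
		log.Printf("(%s) Waiting for machine to stop %d/%d", name, i, attempts)
		time.Sleep(interval)
	}
	// This is the branch the test hit: the VM never stopped within the
	// budget, so the caller reports a stop error and the test fails.
	state, _ := m.GetState()
	return fmt.Errorf("unable to stop vm, current state %q", state)
}

// fakeVM never actually stops, reproducing the failure mode in the log.
type fakeVM struct{}

func (fakeVM) Stop() error               { return nil }
func (fakeVM) GetState() (string, error) { return "Running", nil }

func main() {
	// The real loop in the log uses ~1s per attempt and 120 attempts
	// (about two minutes); the demo shrinks both so it finishes quickly.
	if err := stopWithTimeout("ha-193737-m02", fakeVM{}, 5, 100*time.Millisecond); err != nil {
		log.Printf("X Failed to stop node m02: %v", err)
	}
}

Running the sketch prints a few "Waiting for machine to stop i/5" lines and then the same unable to stop vm, current state "Running" error seen in the stderr above.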
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-193737 -n ha-193737
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-193737 logs -n 25: (1.359799348s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-193737 cp ha-193737-m03:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1122815483/001/cp-test_ha-193737-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m03:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737:/home/docker/cp-test_ha-193737-m03_ha-193737.txt                       |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737 sudo cat                                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m03_ha-193737.txt                                 |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m03:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m02:/home/docker/cp-test_ha-193737-m03_ha-193737-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737-m02 sudo cat                                          | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m03_ha-193737-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m03:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04:/home/docker/cp-test_ha-193737-m03_ha-193737-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737-m04 sudo cat                                          | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m03_ha-193737-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-193737 cp testdata/cp-test.txt                                                | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m04:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1122815483/001/cp-test_ha-193737-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m04:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737:/home/docker/cp-test_ha-193737-m04_ha-193737.txt                       |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737 sudo cat                                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m04_ha-193737.txt                                 |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m04:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m02:/home/docker/cp-test_ha-193737-m04_ha-193737-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737-m02 sudo cat                                          | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m04_ha-193737-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m04:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m03:/home/docker/cp-test_ha-193737-m04_ha-193737-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737-m03 sudo cat                                          | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m04_ha-193737-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-193737 node stop m02 -v=7                                                     | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 19:19:47
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 19:19:47.806967   31154 out.go:345] Setting OutFile to fd 1 ...
	I1001 19:19:47.807072   31154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:19:47.807081   31154 out.go:358] Setting ErrFile to fd 2...
	I1001 19:19:47.807085   31154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:19:47.807300   31154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 19:19:47.807883   31154 out.go:352] Setting JSON to false
	I1001 19:19:47.808862   31154 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3730,"bootTime":1727806658,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 19:19:47.808959   31154 start.go:139] virtualization: kvm guest
	I1001 19:19:47.810915   31154 out.go:177] * [ha-193737] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 19:19:47.812033   31154 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 19:19:47.812047   31154 notify.go:220] Checking for updates...
	I1001 19:19:47.814140   31154 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 19:19:47.815207   31154 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 19:19:47.816467   31154 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:19:47.817736   31154 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 19:19:47.818886   31154 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 19:19:47.820159   31154 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 19:19:47.855456   31154 out.go:177] * Using the kvm2 driver based on user configuration
	I1001 19:19:47.856527   31154 start.go:297] selected driver: kvm2
	I1001 19:19:47.856547   31154 start.go:901] validating driver "kvm2" against <nil>
	I1001 19:19:47.856562   31154 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 19:19:47.857294   31154 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 19:19:47.857376   31154 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-11198/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 19:19:47.872487   31154 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 19:19:47.872546   31154 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 19:19:47.872796   31154 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 19:19:47.872826   31154 cni.go:84] Creating CNI manager for ""
	I1001 19:19:47.872874   31154 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1001 19:19:47.872886   31154 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 19:19:47.872938   31154 start.go:340] cluster config:
	{Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:19:47.873050   31154 iso.go:125] acquiring lock: {Name:mk06aa0d5182c9fbfa5e40313025cbec2b4400a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 19:19:47.874719   31154 out.go:177] * Starting "ha-193737" primary control-plane node in "ha-193737" cluster
	I1001 19:19:47.875804   31154 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 19:19:47.875840   31154 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 19:19:47.875850   31154 cache.go:56] Caching tarball of preloaded images
	I1001 19:19:47.875957   31154 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 19:19:47.875970   31154 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 19:19:47.876255   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:19:47.876273   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json: {Name:mk44677a1f0c01c3be022903d4a146ca8f437dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:19:47.876454   31154 start.go:360] acquireMachinesLock for ha-193737: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 19:19:47.876490   31154 start.go:364] duration metric: took 20.799µs to acquireMachinesLock for "ha-193737"
	I1001 19:19:47.876512   31154 start.go:93] Provisioning new machine with config: &{Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:19:47.876581   31154 start.go:125] createHost starting for "" (driver="kvm2")
	I1001 19:19:47.878132   31154 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 19:19:47.878257   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:19:47.878301   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:19:47.892637   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32839
	I1001 19:19:47.893161   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:19:47.893766   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:19:47.893788   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:19:47.894083   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:19:47.894225   31154 main.go:141] libmachine: (ha-193737) Calling .GetMachineName
	I1001 19:19:47.894350   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:19:47.894482   31154 start.go:159] libmachine.API.Create for "ha-193737" (driver="kvm2")
	I1001 19:19:47.894506   31154 client.go:168] LocalClient.Create starting
	I1001 19:19:47.894539   31154 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem
	I1001 19:19:47.894575   31154 main.go:141] libmachine: Decoding PEM data...
	I1001 19:19:47.894607   31154 main.go:141] libmachine: Parsing certificate...
	I1001 19:19:47.894667   31154 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem
	I1001 19:19:47.894686   31154 main.go:141] libmachine: Decoding PEM data...
	I1001 19:19:47.894699   31154 main.go:141] libmachine: Parsing certificate...
	I1001 19:19:47.894713   31154 main.go:141] libmachine: Running pre-create checks...
	I1001 19:19:47.894730   31154 main.go:141] libmachine: (ha-193737) Calling .PreCreateCheck
	I1001 19:19:47.895057   31154 main.go:141] libmachine: (ha-193737) Calling .GetConfigRaw
	I1001 19:19:47.895392   31154 main.go:141] libmachine: Creating machine...
	I1001 19:19:47.895405   31154 main.go:141] libmachine: (ha-193737) Calling .Create
	I1001 19:19:47.895568   31154 main.go:141] libmachine: (ha-193737) Creating KVM machine...
	I1001 19:19:47.896749   31154 main.go:141] libmachine: (ha-193737) DBG | found existing default KVM network
	I1001 19:19:47.897409   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:47.897251   31177 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211e0}
	I1001 19:19:47.897459   31154 main.go:141] libmachine: (ha-193737) DBG | created network xml: 
	I1001 19:19:47.897477   31154 main.go:141] libmachine: (ha-193737) DBG | <network>
	I1001 19:19:47.897495   31154 main.go:141] libmachine: (ha-193737) DBG |   <name>mk-ha-193737</name>
	I1001 19:19:47.897509   31154 main.go:141] libmachine: (ha-193737) DBG |   <dns enable='no'/>
	I1001 19:19:47.897529   31154 main.go:141] libmachine: (ha-193737) DBG |   
	I1001 19:19:47.897549   31154 main.go:141] libmachine: (ha-193737) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1001 19:19:47.897562   31154 main.go:141] libmachine: (ha-193737) DBG |     <dhcp>
	I1001 19:19:47.897573   31154 main.go:141] libmachine: (ha-193737) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1001 19:19:47.897582   31154 main.go:141] libmachine: (ha-193737) DBG |     </dhcp>
	I1001 19:19:47.897589   31154 main.go:141] libmachine: (ha-193737) DBG |   </ip>
	I1001 19:19:47.897594   31154 main.go:141] libmachine: (ha-193737) DBG |   
	I1001 19:19:47.897599   31154 main.go:141] libmachine: (ha-193737) DBG | </network>
	I1001 19:19:47.897608   31154 main.go:141] libmachine: (ha-193737) DBG | 
	I1001 19:19:47.902355   31154 main.go:141] libmachine: (ha-193737) DBG | trying to create private KVM network mk-ha-193737 192.168.39.0/24...
	I1001 19:19:47.965826   31154 main.go:141] libmachine: (ha-193737) DBG | private KVM network mk-ha-193737 192.168.39.0/24 created
	I1001 19:19:47.965857   31154 main.go:141] libmachine: (ha-193737) Setting up store path in /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737 ...
	I1001 19:19:47.965875   31154 main.go:141] libmachine: (ha-193737) Building disk image from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 19:19:47.965943   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:47.965838   31177 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:19:47.966014   31154 main.go:141] libmachine: (ha-193737) Downloading /home/jenkins/minikube-integration/19736-11198/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 19:19:48.225463   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:48.225322   31177 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa...
	I1001 19:19:48.498755   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:48.498602   31177 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/ha-193737.rawdisk...
	I1001 19:19:48.498778   31154 main.go:141] libmachine: (ha-193737) DBG | Writing magic tar header
	I1001 19:19:48.498788   31154 main.go:141] libmachine: (ha-193737) DBG | Writing SSH key tar header
	I1001 19:19:48.498813   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:48.498738   31177 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737 ...
	I1001 19:19:48.498825   31154 main.go:141] libmachine: (ha-193737) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737
	I1001 19:19:48.498844   31154 main.go:141] libmachine: (ha-193737) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737 (perms=drwx------)
	I1001 19:19:48.498866   31154 main.go:141] libmachine: (ha-193737) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines (perms=drwxr-xr-x)
	I1001 19:19:48.498875   31154 main.go:141] libmachine: (ha-193737) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube (perms=drwxr-xr-x)
	I1001 19:19:48.498909   31154 main.go:141] libmachine: (ha-193737) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines
	I1001 19:19:48.498961   31154 main.go:141] libmachine: (ha-193737) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198 (perms=drwxrwxr-x)
	I1001 19:19:48.498975   31154 main.go:141] libmachine: (ha-193737) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:19:48.498992   31154 main.go:141] libmachine: (ha-193737) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198
	I1001 19:19:48.499012   31154 main.go:141] libmachine: (ha-193737) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 19:19:48.499035   31154 main.go:141] libmachine: (ha-193737) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 19:19:48.499048   31154 main.go:141] libmachine: (ha-193737) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 19:19:48.499056   31154 main.go:141] libmachine: (ha-193737) Creating domain...
	I1001 19:19:48.499066   31154 main.go:141] libmachine: (ha-193737) DBG | Checking permissions on dir: /home/jenkins
	I1001 19:19:48.499074   31154 main.go:141] libmachine: (ha-193737) DBG | Checking permissions on dir: /home
	I1001 19:19:48.499095   31154 main.go:141] libmachine: (ha-193737) DBG | Skipping /home - not owner
	I1001 19:19:48.500091   31154 main.go:141] libmachine: (ha-193737) define libvirt domain using xml: 
	I1001 19:19:48.500110   31154 main.go:141] libmachine: (ha-193737) <domain type='kvm'>
	I1001 19:19:48.500119   31154 main.go:141] libmachine: (ha-193737)   <name>ha-193737</name>
	I1001 19:19:48.500128   31154 main.go:141] libmachine: (ha-193737)   <memory unit='MiB'>2200</memory>
	I1001 19:19:48.500140   31154 main.go:141] libmachine: (ha-193737)   <vcpu>2</vcpu>
	I1001 19:19:48.500149   31154 main.go:141] libmachine: (ha-193737)   <features>
	I1001 19:19:48.500155   31154 main.go:141] libmachine: (ha-193737)     <acpi/>
	I1001 19:19:48.500161   31154 main.go:141] libmachine: (ha-193737)     <apic/>
	I1001 19:19:48.500166   31154 main.go:141] libmachine: (ha-193737)     <pae/>
	I1001 19:19:48.500178   31154 main.go:141] libmachine: (ha-193737)     
	I1001 19:19:48.500186   31154 main.go:141] libmachine: (ha-193737)   </features>
	I1001 19:19:48.500190   31154 main.go:141] libmachine: (ha-193737)   <cpu mode='host-passthrough'>
	I1001 19:19:48.500271   31154 main.go:141] libmachine: (ha-193737)   
	I1001 19:19:48.500322   31154 main.go:141] libmachine: (ha-193737)   </cpu>
	I1001 19:19:48.500344   31154 main.go:141] libmachine: (ha-193737)   <os>
	I1001 19:19:48.500376   31154 main.go:141] libmachine: (ha-193737)     <type>hvm</type>
	I1001 19:19:48.500385   31154 main.go:141] libmachine: (ha-193737)     <boot dev='cdrom'/>
	I1001 19:19:48.500394   31154 main.go:141] libmachine: (ha-193737)     <boot dev='hd'/>
	I1001 19:19:48.500402   31154 main.go:141] libmachine: (ha-193737)     <bootmenu enable='no'/>
	I1001 19:19:48.500407   31154 main.go:141] libmachine: (ha-193737)   </os>
	I1001 19:19:48.500422   31154 main.go:141] libmachine: (ha-193737)   <devices>
	I1001 19:19:48.500428   31154 main.go:141] libmachine: (ha-193737)     <disk type='file' device='cdrom'>
	I1001 19:19:48.500438   31154 main.go:141] libmachine: (ha-193737)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/boot2docker.iso'/>
	I1001 19:19:48.500448   31154 main.go:141] libmachine: (ha-193737)       <target dev='hdc' bus='scsi'/>
	I1001 19:19:48.500454   31154 main.go:141] libmachine: (ha-193737)       <readonly/>
	I1001 19:19:48.500461   31154 main.go:141] libmachine: (ha-193737)     </disk>
	I1001 19:19:48.500475   31154 main.go:141] libmachine: (ha-193737)     <disk type='file' device='disk'>
	I1001 19:19:48.500485   31154 main.go:141] libmachine: (ha-193737)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 19:19:48.500507   31154 main.go:141] libmachine: (ha-193737)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/ha-193737.rawdisk'/>
	I1001 19:19:48.500514   31154 main.go:141] libmachine: (ha-193737)       <target dev='hda' bus='virtio'/>
	I1001 19:19:48.500519   31154 main.go:141] libmachine: (ha-193737)     </disk>
	I1001 19:19:48.500525   31154 main.go:141] libmachine: (ha-193737)     <interface type='network'>
	I1001 19:19:48.500530   31154 main.go:141] libmachine: (ha-193737)       <source network='mk-ha-193737'/>
	I1001 19:19:48.500536   31154 main.go:141] libmachine: (ha-193737)       <model type='virtio'/>
	I1001 19:19:48.500541   31154 main.go:141] libmachine: (ha-193737)     </interface>
	I1001 19:19:48.500547   31154 main.go:141] libmachine: (ha-193737)     <interface type='network'>
	I1001 19:19:48.500552   31154 main.go:141] libmachine: (ha-193737)       <source network='default'/>
	I1001 19:19:48.500558   31154 main.go:141] libmachine: (ha-193737)       <model type='virtio'/>
	I1001 19:19:48.500570   31154 main.go:141] libmachine: (ha-193737)     </interface>
	I1001 19:19:48.500593   31154 main.go:141] libmachine: (ha-193737)     <serial type='pty'>
	I1001 19:19:48.500606   31154 main.go:141] libmachine: (ha-193737)       <target port='0'/>
	I1001 19:19:48.500616   31154 main.go:141] libmachine: (ha-193737)     </serial>
	I1001 19:19:48.500621   31154 main.go:141] libmachine: (ha-193737)     <console type='pty'>
	I1001 19:19:48.500632   31154 main.go:141] libmachine: (ha-193737)       <target type='serial' port='0'/>
	I1001 19:19:48.500644   31154 main.go:141] libmachine: (ha-193737)     </console>
	I1001 19:19:48.500651   31154 main.go:141] libmachine: (ha-193737)     <rng model='virtio'>
	I1001 19:19:48.500662   31154 main.go:141] libmachine: (ha-193737)       <backend model='random'>/dev/random</backend>
	I1001 19:19:48.500669   31154 main.go:141] libmachine: (ha-193737)     </rng>
	I1001 19:19:48.500674   31154 main.go:141] libmachine: (ha-193737)     
	I1001 19:19:48.500681   31154 main.go:141] libmachine: (ha-193737)     
	I1001 19:19:48.500687   31154 main.go:141] libmachine: (ha-193737)   </devices>
	I1001 19:19:48.500693   31154 main.go:141] libmachine: (ha-193737) </domain>
	I1001 19:19:48.500703   31154 main.go:141] libmachine: (ha-193737) 
	I1001 19:19:48.505062   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:e8:37:5d in network default
	I1001 19:19:48.505636   31154 main.go:141] libmachine: (ha-193737) Ensuring networks are active...
	I1001 19:19:48.505675   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:48.506541   31154 main.go:141] libmachine: (ha-193737) Ensuring network default is active
	I1001 19:19:48.506813   31154 main.go:141] libmachine: (ha-193737) Ensuring network mk-ha-193737 is active
	I1001 19:19:48.507255   31154 main.go:141] libmachine: (ha-193737) Getting domain xml...
	I1001 19:19:48.507904   31154 main.go:141] libmachine: (ha-193737) Creating domain...
	I1001 19:19:49.716659   31154 main.go:141] libmachine: (ha-193737) Waiting to get IP...
	I1001 19:19:49.717406   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:49.717831   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:49.717883   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:49.717825   31177 retry.go:31] will retry after 192.827447ms: waiting for machine to come up
	I1001 19:19:49.912407   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:49.912907   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:49.912957   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:49.912879   31177 retry.go:31] will retry after 258.269769ms: waiting for machine to come up
	I1001 19:19:50.172507   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:50.173033   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:50.173054   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:50.172948   31177 retry.go:31] will retry after 373.637188ms: waiting for machine to come up
	I1001 19:19:50.548615   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:50.549181   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:50.549210   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:50.549112   31177 retry.go:31] will retry after 430.626472ms: waiting for machine to come up
	I1001 19:19:50.981709   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:50.982164   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:50.982197   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:50.982117   31177 retry.go:31] will retry after 529.86174ms: waiting for machine to come up
	I1001 19:19:51.513872   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:51.514354   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:51.514379   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:51.514310   31177 retry.go:31] will retry after 925.92584ms: waiting for machine to come up
	I1001 19:19:52.441513   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:52.442015   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:52.442079   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:52.441913   31177 retry.go:31] will retry after 1.034076263s: waiting for machine to come up
	I1001 19:19:53.477995   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:53.478427   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:53.478449   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:53.478392   31177 retry.go:31] will retry after 1.13194403s: waiting for machine to come up
	I1001 19:19:54.612551   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:54.613118   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:54.613140   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:54.613054   31177 retry.go:31] will retry after 1.647034063s: waiting for machine to come up
	I1001 19:19:56.262733   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:56.263161   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:56.263186   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:56.263102   31177 retry.go:31] will retry after 1.500997099s: waiting for machine to come up
	I1001 19:19:57.765863   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:57.766323   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:57.766356   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:57.766274   31177 retry.go:31] will retry after 2.455749683s: waiting for machine to come up
	I1001 19:20:00.223334   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:00.223743   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:20:00.223759   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:20:00.223705   31177 retry.go:31] will retry after 2.437856543s: waiting for machine to come up
	I1001 19:20:02.664433   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:02.664809   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:20:02.664832   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:20:02.664763   31177 retry.go:31] will retry after 3.902681899s: waiting for machine to come up
	I1001 19:20:06.571440   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:06.571775   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:20:06.571797   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:20:06.571730   31177 retry.go:31] will retry after 5.423043301s: waiting for machine to come up
	I1001 19:20:11.999360   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:11.999779   31154 main.go:141] libmachine: (ha-193737) Found IP for machine: 192.168.39.14
	I1001 19:20:11.999815   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has current primary IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:11.999824   31154 main.go:141] libmachine: (ha-193737) Reserving static IP address...
	I1001 19:20:12.000199   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find host DHCP lease matching {name: "ha-193737", mac: "52:54:00:80:2b:09", ip: "192.168.39.14"} in network mk-ha-193737
	I1001 19:20:12.077653   31154 main.go:141] libmachine: (ha-193737) Reserved static IP address: 192.168.39.14
	I1001 19:20:12.077732   31154 main.go:141] libmachine: (ha-193737) DBG | Getting to WaitForSSH function...
	I1001 19:20:12.077743   31154 main.go:141] libmachine: (ha-193737) Waiting for SSH to be available...
	I1001 19:20:12.080321   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.080865   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:minikube Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.080898   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.081006   31154 main.go:141] libmachine: (ha-193737) DBG | Using SSH client type: external
	I1001 19:20:12.081047   31154 main.go:141] libmachine: (ha-193737) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa (-rw-------)
	I1001 19:20:12.081075   31154 main.go:141] libmachine: (ha-193737) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.14 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 19:20:12.081085   31154 main.go:141] libmachine: (ha-193737) DBG | About to run SSH command:
	I1001 19:20:12.081096   31154 main.go:141] libmachine: (ha-193737) DBG | exit 0
	I1001 19:20:12.208487   31154 main.go:141] libmachine: (ha-193737) DBG | SSH cmd err, output: <nil>: 
	I1001 19:20:12.208725   31154 main.go:141] libmachine: (ha-193737) KVM machine creation complete!
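The "will retry after …" lines above come from minikube's retry helper polling libvirt for the domain's DHCP lease until the machine reports an IP. A minimal Go sketch of that pattern, for illustration only (waitForIP, the lookup callback and the exact delays are hypothetical stand-ins, not minikube's retry.go):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookup with a growing, jittered delay until it returns an
	// address or the overall timeout expires, mirroring the escalating
	// "will retry after 1.6s / 2.4s / 3.9s / 5.4s" intervals in the log above.
	func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := time.Second
		for time.Now().Before(deadline) {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			delay += time.Duration(rand.Int63n(int64(delay))) // grow and jitter
			fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
			time.Sleep(delay)
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		calls := 0
		ip, err := waitForIP(func() (string, error) {
			calls++
			if calls < 3 {
				return "", errors.New("no DHCP lease yet")
			}
			return "192.168.39.14", nil
		}, 2*time.Minute)
		fmt.Println(ip, err)
	}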
	I1001 19:20:12.209102   31154 main.go:141] libmachine: (ha-193737) Calling .GetConfigRaw
	I1001 19:20:12.209646   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:12.209809   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:12.209935   31154 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 19:20:12.209949   31154 main.go:141] libmachine: (ha-193737) Calling .GetState
	I1001 19:20:12.211166   31154 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 19:20:12.211190   31154 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 19:20:12.211195   31154 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 19:20:12.211201   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:12.213529   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.213857   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.213883   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.213972   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:12.214116   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.214264   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.214394   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:12.214556   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:12.214781   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:20:12.214795   31154 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 19:20:12.319892   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 19:20:12.319913   31154 main.go:141] libmachine: Detecting the provisioner...
	I1001 19:20:12.319921   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:12.322718   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.323165   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.323192   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.323331   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:12.323522   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.323695   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.323840   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:12.324072   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:12.324284   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:20:12.324296   31154 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 19:20:12.429264   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 19:20:12.429335   31154 main.go:141] libmachine: found compatible host: buildroot
	I1001 19:20:12.429344   31154 main.go:141] libmachine: Provisioning with buildroot...
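Provisioner detection above simply runs cat /etc/os-release over SSH and matches the distribution ID. A hedged sketch of that check (detectProvisioner is a made-up helper, not libmachine's API):

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// detectProvisioner picks a provisioner name from /etc/os-release content,
	// e.g. the Buildroot output shown in the log above.
	func detectProvisioner(osRelease string) string {
		sc := bufio.NewScanner(strings.NewReader(osRelease))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "ID=") {
				return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
			}
		}
		return "unknown"
	}

	func main() {
		out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
		fmt.Println("found compatible host:", detectProvisioner(out))
	}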
	I1001 19:20:12.429358   31154 main.go:141] libmachine: (ha-193737) Calling .GetMachineName
	I1001 19:20:12.429572   31154 buildroot.go:166] provisioning hostname "ha-193737"
	I1001 19:20:12.429594   31154 main.go:141] libmachine: (ha-193737) Calling .GetMachineName
	I1001 19:20:12.429736   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:12.432551   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.432897   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.432926   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.433127   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:12.433317   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.433512   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.433661   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:12.433801   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:12.433993   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:20:12.434007   31154 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-193737 && echo "ha-193737" | sudo tee /etc/hostname
	I1001 19:20:12.557230   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-193737
	
	I1001 19:20:12.557264   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:12.560034   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.560377   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.560404   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.560580   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:12.560736   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.560897   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.561023   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:12.561173   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:12.561344   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:20:12.561360   31154 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-193737' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-193737/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-193737' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 19:20:12.673716   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 19:20:12.673759   31154 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 19:20:12.673797   31154 buildroot.go:174] setting up certificates
	I1001 19:20:12.673811   31154 provision.go:84] configureAuth start
	I1001 19:20:12.673825   31154 main.go:141] libmachine: (ha-193737) Calling .GetMachineName
	I1001 19:20:12.674136   31154 main.go:141] libmachine: (ha-193737) Calling .GetIP
	I1001 19:20:12.676892   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.677280   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.677321   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.677483   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:12.679978   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.680305   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.680326   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.680487   31154 provision.go:143] copyHostCerts
	I1001 19:20:12.680516   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:20:12.680561   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 19:20:12.680573   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:20:12.680654   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 19:20:12.680751   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:20:12.680775   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 19:20:12.680787   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:20:12.680824   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 19:20:12.680885   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:20:12.680909   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 19:20:12.680917   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:20:12.680951   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 19:20:12.681013   31154 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.ha-193737 san=[127.0.0.1 192.168.39.14 ha-193737 localhost minikube]
	I1001 19:20:12.842484   31154 provision.go:177] copyRemoteCerts
	I1001 19:20:12.842574   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 19:20:12.842621   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:12.845898   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.846287   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.846310   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.846561   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:12.846731   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.846941   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:12.847077   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:20:12.930698   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 19:20:12.930795   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 19:20:12.955852   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 19:20:12.955930   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1001 19:20:12.979656   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 19:20:12.979722   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 19:20:13.003473   31154 provision.go:87] duration metric: took 329.649424ms to configureAuth
	I1001 19:20:13.003500   31154 buildroot.go:189] setting minikube options for container-runtime
	I1001 19:20:13.003695   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:20:13.003768   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:13.006651   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.006965   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.006994   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.007204   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:13.007396   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:13.007569   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:13.007765   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:13.007963   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:13.008170   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:20:13.008194   31154 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 19:20:13.223895   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 19:20:13.223928   31154 main.go:141] libmachine: Checking connection to Docker...
	I1001 19:20:13.223938   31154 main.go:141] libmachine: (ha-193737) Calling .GetURL
	I1001 19:20:13.225295   31154 main.go:141] libmachine: (ha-193737) DBG | Using libvirt version 6000000
	I1001 19:20:13.227525   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.227866   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.227899   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.227999   31154 main.go:141] libmachine: Docker is up and running!
	I1001 19:20:13.228014   31154 main.go:141] libmachine: Reticulating splines...
	I1001 19:20:13.228022   31154 client.go:171] duration metric: took 25.333507515s to LocalClient.Create
	I1001 19:20:13.228041   31154 start.go:167] duration metric: took 25.333560566s to libmachine.API.Create "ha-193737"
	I1001 19:20:13.228050   31154 start.go:293] postStartSetup for "ha-193737" (driver="kvm2")
	I1001 19:20:13.228060   31154 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 19:20:13.228083   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:13.228317   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 19:20:13.228339   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:13.230391   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.230709   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.230732   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.230837   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:13.230988   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:13.231120   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:13.231290   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:20:13.314353   31154 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 19:20:13.318432   31154 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 19:20:13.318458   31154 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 19:20:13.318541   31154 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 19:20:13.318638   31154 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 19:20:13.318652   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /etc/ssl/certs/184302.pem
	I1001 19:20:13.318780   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 19:20:13.328571   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:20:13.353035   31154 start.go:296] duration metric: took 124.970717ms for postStartSetup
	I1001 19:20:13.353110   31154 main.go:141] libmachine: (ha-193737) Calling .GetConfigRaw
	I1001 19:20:13.353736   31154 main.go:141] libmachine: (ha-193737) Calling .GetIP
	I1001 19:20:13.356423   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.356817   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.356852   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.357086   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:20:13.357278   31154 start.go:128] duration metric: took 25.480687424s to createHost
	I1001 19:20:13.357297   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:13.359783   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.360160   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.360189   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.360384   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:13.360591   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:13.360774   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:13.360932   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:13.361105   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:13.361274   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:20:13.361289   31154 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 19:20:13.464991   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727810413.446268696
	
	I1001 19:20:13.465023   31154 fix.go:216] guest clock: 1727810413.446268696
	I1001 19:20:13.465037   31154 fix.go:229] Guest: 2024-10-01 19:20:13.446268696 +0000 UTC Remote: 2024-10-01 19:20:13.35728811 +0000 UTC m=+25.585126920 (delta=88.980586ms)
	I1001 19:20:13.465070   31154 fix.go:200] guest clock delta is within tolerance: 88.980586ms
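The clock check above runs date +%s.%N on the guest and compares it against the host time, accepting a small skew (the ~89ms delta in the log passes). A tiny sketch of that comparison under the assumption of a 2s tolerance (clockDelta is a hypothetical helper):

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns the signed
	// offset from the supplied local time.
	func clockDelta(guest string, local time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guest, 64)
		if err != nil {
			return 0, err
		}
		guestTime := time.Unix(0, int64(secs*float64(time.Second)))
		return guestTime.Sub(local), nil
	}

	func main() {
		local := time.Unix(0, 1727810413357288110) // host timestamp from the log above
		d, err := clockDelta("1727810413.446268696", local)
		if err != nil {
			panic(err)
		}
		within := math.Abs(float64(d)) < float64(2*time.Second)
		fmt.Printf("guest clock delta %v, within tolerance: %v\n", d, within)
	}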
	I1001 19:20:13.465076   31154 start.go:83] releasing machines lock for "ha-193737", held for 25.588575039s
	I1001 19:20:13.465101   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:13.465340   31154 main.go:141] libmachine: (ha-193737) Calling .GetIP
	I1001 19:20:13.468083   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.468419   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.468447   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.468613   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:13.469143   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:13.469301   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:13.469362   31154 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 19:20:13.469413   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:13.469528   31154 ssh_runner.go:195] Run: cat /version.json
	I1001 19:20:13.469548   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:13.471980   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.472049   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.472309   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.472339   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.472393   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.472414   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.472482   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:13.472622   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:13.472666   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:13.472784   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:13.472833   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:13.472925   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:13.472991   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:20:13.473062   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:20:13.597462   31154 ssh_runner.go:195] Run: systemctl --version
	I1001 19:20:13.603452   31154 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 19:20:13.764276   31154 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 19:20:13.770676   31154 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 19:20:13.770753   31154 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 19:20:13.785990   31154 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 19:20:13.786018   31154 start.go:495] detecting cgroup driver to use...
	I1001 19:20:13.786088   31154 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 19:20:13.802042   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 19:20:13.815442   31154 docker.go:217] disabling cri-docker service (if available) ...
	I1001 19:20:13.815514   31154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 19:20:13.829012   31154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 19:20:13.842769   31154 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 19:20:13.956694   31154 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 19:20:14.102874   31154 docker.go:233] disabling docker service ...
	I1001 19:20:14.102940   31154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 19:20:14.117261   31154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 19:20:14.129985   31154 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 19:20:14.273597   31154 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 19:20:14.384529   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 19:20:14.397753   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 19:20:14.415792   31154 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 19:20:14.415850   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:14.426007   31154 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 19:20:14.426087   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:14.436393   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:14.446247   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:14.456029   31154 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 19:20:14.466078   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:14.475781   31154 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:14.492551   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:14.502706   31154 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 19:20:14.512290   31154 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 19:20:14.512379   31154 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 19:20:14.525913   31154 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 19:20:14.535543   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:20:14.653960   31154 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 19:20:14.741173   31154 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 19:20:14.741263   31154 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 19:20:14.745800   31154 start.go:563] Will wait 60s for crictl version
	I1001 19:20:14.745869   31154 ssh_runner.go:195] Run: which crictl
	I1001 19:20:14.749449   31154 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 19:20:14.789074   31154 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 19:20:14.789159   31154 ssh_runner.go:195] Run: crio --version
	I1001 19:20:14.820545   31154 ssh_runner.go:195] Run: crio --version
	I1001 19:20:14.849920   31154 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 19:20:14.850894   31154 main.go:141] libmachine: (ha-193737) Calling .GetIP
	I1001 19:20:14.853389   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:14.853698   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:14.853724   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:14.853935   31154 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 19:20:14.857967   31154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 19:20:14.870673   31154 kubeadm.go:883] updating cluster {Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 19:20:14.870794   31154 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 19:20:14.870846   31154 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 19:20:14.901722   31154 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1001 19:20:14.901791   31154 ssh_runner.go:195] Run: which lz4
	I1001 19:20:14.905716   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1001 19:20:14.905869   31154 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 19:20:14.909954   31154 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 19:20:14.909985   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1001 19:20:16.176019   31154 crio.go:462] duration metric: took 1.270229445s to copy over tarball
	I1001 19:20:16.176091   31154 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 19:20:18.196924   31154 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.020807915s)
	I1001 19:20:18.196955   31154 crio.go:469] duration metric: took 2.020904101s to extract the tarball
	I1001 19:20:18.196963   31154 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1001 19:20:18.232395   31154 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 19:20:18.277292   31154 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 19:20:18.277310   31154 cache_images.go:84] Images are preloaded, skipping loading
	I1001 19:20:18.277317   31154 kubeadm.go:934] updating node { 192.168.39.14 8443 v1.31.1 crio true true} ...
	I1001 19:20:18.277404   31154 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-193737 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 19:20:18.277469   31154 ssh_runner.go:195] Run: crio config
	I1001 19:20:18.320909   31154 cni.go:84] Creating CNI manager for ""
	I1001 19:20:18.320940   31154 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1001 19:20:18.320955   31154 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 19:20:18.320983   31154 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.14 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-193737 NodeName:ha-193737 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.14"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.14 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 19:20:18.321130   31154 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.14
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-193737"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.14
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.14"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
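The kubeadm config above is rendered from the option struct logged at kubeadm.go:181 and later written to /var/tmp/minikube/kubeadm.yaml.new (see the 2150-byte scp below). A trimmed sketch of rendering just the nodeRegistration section with text/template, as an illustration only; the template text and field names here are reduced stand-ins, not minikube's real template:

	package main

	import (
		"os"
		"text/template"
	)

	// nodeOpts holds the handful of values substituted into the snippet below.
	type nodeOpts struct {
		NodeName string
		NodeIP   string
		CRI      string
	}

	const nodeTmpl = `nodeRegistration:
	  criSocket: {{.CRI}}
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	  taints: []
	`

	func main() {
		t := template.Must(template.New("node").Parse(nodeTmpl))
		// Values taken from the log above.
		opts := nodeOpts{
			NodeName: "ha-193737",
			NodeIP:   "192.168.39.14",
			CRI:      "unix:///var/run/crio/crio.sock",
		}
		if err := t.Execute(os.Stdout, opts); err != nil {
			panic(err)
		}
	}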
	I1001 19:20:18.321154   31154 kube-vip.go:115] generating kube-vip config ...
	I1001 19:20:18.321192   31154 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1001 19:20:18.337979   31154 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1001 19:20:18.338099   31154 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
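This static Pod is what later gets copied to /etc/kubernetes/manifests/kube-vip.yaml (the 1447-byte scp below); kube-vip then advertises the virtual IP 192.168.39.254, which the cluster config maps to control-plane.minikube.internal:8443. A tiny sketch of writing such a manifest after a basic sanity check (writeStaticPod and the path handling are illustrative, not minikube's code):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// writeStaticPod drops a rendered manifest into the kubelet's static pod dir
	// after checking that the expected VIP actually made it into the template.
	func writeStaticPod(manifestDir, name, rendered, vip string) error {
		if !strings.Contains(rendered, vip) {
			return fmt.Errorf("rendered manifest is missing VIP %s", vip)
		}
		return os.WriteFile(filepath.Join(manifestDir, name), []byte(rendered), 0o644)
	}

	func main() {
		rendered := "apiVersion: v1\nkind: Pod\n# ...env includes address: 192.168.39.254...\n"
		// Using a temp dir here; on the node this would be /etc/kubernetes/manifests.
		dir, _ := os.MkdirTemp("", "manifests")
		if err := writeStaticPod(dir, "kube-vip.yaml", rendered, "192.168.39.254"); err != nil {
			panic(err)
		}
		fmt.Println("wrote", filepath.Join(dir, "kube-vip.yaml"))
	}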
	I1001 19:20:18.338161   31154 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 19:20:18.347788   31154 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 19:20:18.347864   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1001 19:20:18.356907   31154 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1001 19:20:18.372922   31154 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 19:20:18.388904   31154 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I1001 19:20:18.404938   31154 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1001 19:20:18.421257   31154 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1001 19:20:18.425122   31154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 19:20:18.436829   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:20:18.545073   31154 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 19:20:18.560862   31154 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737 for IP: 192.168.39.14
	I1001 19:20:18.560887   31154 certs.go:194] generating shared ca certs ...
	I1001 19:20:18.560910   31154 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:18.561104   31154 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 19:20:18.561167   31154 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 19:20:18.561182   31154 certs.go:256] generating profile certs ...
	I1001 19:20:18.561249   31154 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key
	I1001 19:20:18.561277   31154 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.crt with IP's: []
	I1001 19:20:19.147252   31154 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.crt ...
	I1001 19:20:19.147288   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.crt: {Name:mk6cc12194e2b1b488446b45fb57531c12b19cd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:19.147481   31154 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key ...
	I1001 19:20:19.147500   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key: {Name:mk1f7ee6c9ea6b8fcc952a031324588416a57469 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:19.147599   31154 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.c3487d2e
	I1001 19:20:19.147622   31154 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.c3487d2e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.14 192.168.39.254]
	I1001 19:20:19.274032   31154 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.c3487d2e ...
	I1001 19:20:19.274061   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.c3487d2e: {Name:mk19f3cf4cd1f2fca54e40738408be6aa73421ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:19.274224   31154 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.c3487d2e ...
	I1001 19:20:19.274242   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.c3487d2e: {Name:mk2ba24a36a70c8a6e47019bdcda573a26500b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:19.274335   31154 certs.go:381] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.c3487d2e -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt
	I1001 19:20:19.274441   31154 certs.go:385] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.c3487d2e -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key
	I1001 19:20:19.274522   31154 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key
	I1001 19:20:19.274541   31154 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt with IP's: []
	I1001 19:20:19.432987   31154 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt ...
	I1001 19:20:19.433018   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt: {Name:mkaa29f743f43e700e39d0141b3a793971db9bab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:19.433198   31154 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key ...
	I1001 19:20:19.433215   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key: {Name:mkda8f4e7f39ac52933dd1a3f0855317051465de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
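The profile certs above (the apiserver cert is issued for 10.96.0.1, 127.0.0.1, 10.0.0.1, the node IP and the VIP) come from minikube's crypto helpers. A compact sketch of issuing a certificate with the same IP SANs using the standard library, for illustration; it is self-signed here for brevity, whereas minikube signs with its own CA:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The SAN list mirrors the IPs logged for the apiserver cert.
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.14"),
				net.ParseIP("192.168.39.254"),
			},
			DNSNames: []string{"ha-193737", "localhost", "minikube"},
		}
		// Self-signed for brevity; a real setup would sign with the cluster CA key.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	}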
	I1001 19:20:19.433333   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 19:20:19.433358   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 19:20:19.433374   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 19:20:19.433394   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 19:20:19.433411   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 19:20:19.433428   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 19:20:19.433441   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 19:20:19.433457   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 19:20:19.433541   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem (1338 bytes)
	W1001 19:20:19.433593   31154 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430_empty.pem, impossibly tiny 0 bytes
	I1001 19:20:19.433606   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 19:20:19.433643   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 19:20:19.433673   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 19:20:19.433703   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 19:20:19.433758   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:20:19.433792   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem -> /usr/share/ca-certificates/18430.pem
	I1001 19:20:19.433812   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /usr/share/ca-certificates/184302.pem
	I1001 19:20:19.433830   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:20:19.434414   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 19:20:19.462971   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 19:20:19.486817   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 19:20:19.510214   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 19:20:19.536715   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1001 19:20:19.562219   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 19:20:19.587563   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 19:20:19.611975   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 19:20:19.635789   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem --> /usr/share/ca-certificates/18430.pem (1338 bytes)
	I1001 19:20:19.660541   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /usr/share/ca-certificates/184302.pem (1708 bytes)
	I1001 19:20:19.686922   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 19:20:19.713247   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
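Each scp and Run entry above executes over an SSH session to the control-plane VM (192.168.39.14, user docker, the machine's id_rsa key, as sshutil.go reports later in this log). A minimal sketch of issuing one such remote command with golang.org/x/crypto/ssh, assuming those same connection details; the error handling and host-key policy are illustrative.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and user taken from the sshutil.go line in this log.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.14:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// Same command the next log line runs on the node.
	out, err := sess.CombinedOutput("openssl version")
	fmt.Printf("%s err=%v\n", out, err)
}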
	I1001 19:20:19.737109   31154 ssh_runner.go:195] Run: openssl version
	I1001 19:20:19.743466   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18430.pem && ln -fs /usr/share/ca-certificates/18430.pem /etc/ssl/certs/18430.pem"
	I1001 19:20:19.755116   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18430.pem
	I1001 19:20:19.760240   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 19:20:19.760326   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18430.pem
	I1001 19:20:19.767474   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18430.pem /etc/ssl/certs/51391683.0"
	I1001 19:20:19.779182   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/184302.pem && ln -fs /usr/share/ca-certificates/184302.pem /etc/ssl/certs/184302.pem"
	I1001 19:20:19.790431   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/184302.pem
	I1001 19:20:19.795533   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 19:20:19.795593   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/184302.pem
	I1001 19:20:19.801533   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/184302.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 19:20:19.812537   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 19:20:19.823577   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:20:19.828798   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:20:19.828870   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:20:19.835152   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
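The openssl x509 -hash calls above compute each certificate's OpenSSL subject hash, and the ln -fs commands expose the certificate under /etc/ssl/certs/<hash>.0, which is where OpenSSL-linked clients look up trust anchors. A small Go sketch of that convention, shelling out to the same openssl invocation shown in the log; the linkCert helper and the hard-coded path are illustrative, not part of minikube.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCert exposes certPath under /etc/ssl/certs/<subject-hash>.0 so that
// OpenSSL-based clients can find it as a trust anchor.
func linkCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	// Replace any stale link, mirroring the `ln -fs` in the log.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}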
	I1001 19:20:19.846376   31154 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 19:20:19.850628   31154 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 19:20:19.850680   31154 kubeadm.go:392] StartCluster: {Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:20:19.850761   31154 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 19:20:19.850812   31154 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 19:20:19.892830   31154 cri.go:89] found id: ""
	I1001 19:20:19.892895   31154 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 19:20:19.902960   31154 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 19:20:19.913367   31154 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 19:20:19.923292   31154 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 19:20:19.923330   31154 kubeadm.go:157] found existing configuration files:
	
	I1001 19:20:19.923388   31154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 19:20:19.932878   31154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 19:20:19.932945   31154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 19:20:19.943333   31154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 19:20:19.952676   31154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 19:20:19.952738   31154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 19:20:19.962992   31154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 19:20:19.972649   31154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 19:20:19.972735   31154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 19:20:19.982834   31154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 19:20:19.993409   31154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 19:20:19.993469   31154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 19:20:20.002988   31154 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 19:20:20.127435   31154 kubeadm.go:310] W1001 19:20:20.114172     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 19:20:20.128326   31154 kubeadm.go:310] W1001 19:20:20.115365     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 19:20:20.262781   31154 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 19:20:31.543814   31154 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 19:20:31.543907   31154 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 19:20:31.543995   31154 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 19:20:31.544073   31154 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 19:20:31.544148   31154 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 19:20:31.544203   31154 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 19:20:31.545532   31154 out.go:235]   - Generating certificates and keys ...
	I1001 19:20:31.545611   31154 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 19:20:31.545691   31154 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 19:20:31.545778   31154 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1001 19:20:31.545854   31154 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1001 19:20:31.545932   31154 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1001 19:20:31.546012   31154 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1001 19:20:31.546085   31154 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1001 19:20:31.546175   31154 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-193737 localhost] and IPs [192.168.39.14 127.0.0.1 ::1]
	I1001 19:20:31.546218   31154 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1001 19:20:31.546369   31154 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-193737 localhost] and IPs [192.168.39.14 127.0.0.1 ::1]
	I1001 19:20:31.546436   31154 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1001 19:20:31.546488   31154 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1001 19:20:31.546527   31154 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1001 19:20:31.546577   31154 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 19:20:31.546623   31154 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 19:20:31.546668   31154 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 19:20:31.546722   31154 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 19:20:31.546817   31154 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 19:20:31.546863   31154 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 19:20:31.546932   31154 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 19:20:31.547004   31154 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 19:20:31.549095   31154 out.go:235]   - Booting up control plane ...
	I1001 19:20:31.549193   31154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 19:20:31.549275   31154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 19:20:31.549365   31154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 19:20:31.549456   31154 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 19:20:31.549553   31154 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 19:20:31.549596   31154 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 19:20:31.549707   31154 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 19:20:31.549790   31154 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 19:20:31.549840   31154 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.357694ms
	I1001 19:20:31.549900   31154 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1001 19:20:31.549947   31154 kubeadm.go:310] [api-check] The API server is healthy after 6.04683454s
	I1001 19:20:31.550033   31154 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 19:20:31.550189   31154 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 19:20:31.550277   31154 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 19:20:31.550430   31154 kubeadm.go:310] [mark-control-plane] Marking the node ha-193737 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 19:20:31.550487   31154 kubeadm.go:310] [bootstrap-token] Using token: 7by4e8.7cs25dkxb8txjdft
	I1001 19:20:31.551753   31154 out.go:235]   - Configuring RBAC rules ...
	I1001 19:20:31.551859   31154 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 19:20:31.551994   31154 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 19:20:31.552131   31154 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 19:20:31.552254   31154 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 19:20:31.552369   31154 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 19:20:31.552467   31154 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 19:20:31.552576   31154 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 19:20:31.552620   31154 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 19:20:31.552661   31154 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 19:20:31.552670   31154 kubeadm.go:310] 
	I1001 19:20:31.552724   31154 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 19:20:31.552736   31154 kubeadm.go:310] 
	I1001 19:20:31.552812   31154 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 19:20:31.552820   31154 kubeadm.go:310] 
	I1001 19:20:31.552841   31154 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 19:20:31.552936   31154 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 19:20:31.553000   31154 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 19:20:31.553018   31154 kubeadm.go:310] 
	I1001 19:20:31.553076   31154 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 19:20:31.553082   31154 kubeadm.go:310] 
	I1001 19:20:31.553119   31154 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 19:20:31.553125   31154 kubeadm.go:310] 
	I1001 19:20:31.553165   31154 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 19:20:31.553231   31154 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 19:20:31.553309   31154 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 19:20:31.553319   31154 kubeadm.go:310] 
	I1001 19:20:31.553382   31154 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 19:20:31.553446   31154 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 19:20:31.553452   31154 kubeadm.go:310] 
	I1001 19:20:31.553515   31154 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7by4e8.7cs25dkxb8txjdft \
	I1001 19:20:31.553595   31154 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 \
	I1001 19:20:31.553612   31154 kubeadm.go:310] 	--control-plane 
	I1001 19:20:31.553616   31154 kubeadm.go:310] 
	I1001 19:20:31.553679   31154 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 19:20:31.553686   31154 kubeadm.go:310] 
	I1001 19:20:31.553757   31154 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7by4e8.7cs25dkxb8txjdft \
	I1001 19:20:31.553878   31154 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 
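The --discovery-token-ca-cert-hash printed above is the standard kubeadm value: a SHA-256 digest of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. A short Go sketch that reproduces such a digest from a CA certificate file; the file name here is a placeholder.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path is illustrative; on the node the cluster CA lives under
	// /var/lib/minikube/certs/ca.crt in this setup.
	pemBytes, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's discovery hash is SHA-256 over the DER-encoded
	// SubjectPublicKeyInfo of the cluster CA certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}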
	I1001 19:20:31.553899   31154 cni.go:84] Creating CNI manager for ""
	I1001 19:20:31.553906   31154 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1001 19:20:31.555354   31154 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1001 19:20:31.556734   31154 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1001 19:20:31.562528   31154 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1001 19:20:31.562546   31154 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1001 19:20:31.584306   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1001 19:20:31.963746   31154 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 19:20:31.963826   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:31.963839   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-193737 minikube.k8s.io/updated_at=2024_10_01T19_20_31_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=ha-193737 minikube.k8s.io/primary=true
	I1001 19:20:32.001753   31154 ops.go:34] apiserver oom_adj: -16
	I1001 19:20:32.132202   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:32.632805   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:33.133195   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:33.633216   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:34.132915   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:34.632316   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:35.132491   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:35.632537   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:36.132620   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:36.218756   31154 kubeadm.go:1113] duration metric: took 4.255002576s to wait for elevateKubeSystemPrivileges
	I1001 19:20:36.218788   31154 kubeadm.go:394] duration metric: took 16.368111595s to StartCluster
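The block of repeated `kubectl get sa default` runs above is a poll: the control plane is up, but the RBAC binding step needs the default ServiceAccount, which the controller-manager only creates shortly after start (the elevateKubeSystemPrivileges wait took about 4.3s here). A hedged client-go sketch of the same wait, assuming the kubeconfig path shown in the log; the timeout and interval are illustrative.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(2 * time.Minute)
	for {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		if time.Now().After(deadline) {
			panic(fmt.Sprintf("timed out waiting for default ServiceAccount: %v", err))
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log timestamps
	}
}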
	I1001 19:20:36.218804   31154 settings.go:142] acquiring lock: {Name:mkeb3fe64ed992373f048aef24eb0d675bcab60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:36.218873   31154 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 19:20:36.219494   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/kubeconfig: {Name:mk7ef19d546928001abf2478a3abe1d17765a591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:36.219713   31154 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:20:36.219727   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1001 19:20:36.219734   31154 start.go:241] waiting for startup goroutines ...
	I1001 19:20:36.219741   31154 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 19:20:36.219834   31154 addons.go:69] Setting storage-provisioner=true in profile "ha-193737"
	I1001 19:20:36.219856   31154 addons.go:234] Setting addon storage-provisioner=true in "ha-193737"
	I1001 19:20:36.219869   31154 addons.go:69] Setting default-storageclass=true in profile "ha-193737"
	I1001 19:20:36.219886   31154 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-193737"
	I1001 19:20:36.219893   31154 host.go:66] Checking if "ha-193737" exists ...
	I1001 19:20:36.219970   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:20:36.220394   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:20:36.220428   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:20:36.220398   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:20:36.220520   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:20:36.237915   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41109
	I1001 19:20:36.238065   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40357
	I1001 19:20:36.238375   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:20:36.238551   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:20:36.238872   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:20:36.238891   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:20:36.239076   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:20:36.239108   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:20:36.239214   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:20:36.239454   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:20:36.239611   31154 main.go:141] libmachine: (ha-193737) Calling .GetState
	I1001 19:20:36.239781   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:20:36.239809   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:20:36.241737   31154 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 19:20:36.241972   31154 kapi.go:59] client config for ha-193737: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.crt", KeyFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key", CAFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 19:20:36.242414   31154 cert_rotation.go:140] Starting client certificate rotation controller
	I1001 19:20:36.242541   31154 addons.go:234] Setting addon default-storageclass=true in "ha-193737"
	I1001 19:20:36.242580   31154 host.go:66] Checking if "ha-193737" exists ...
	I1001 19:20:36.242883   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:20:36.242931   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:20:36.258780   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39715
	I1001 19:20:36.259292   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:20:36.259824   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:20:36.259850   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:20:36.260262   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:20:36.260587   31154 main.go:141] libmachine: (ha-193737) Calling .GetState
	I1001 19:20:36.262369   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37495
	I1001 19:20:36.262435   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:36.263083   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:20:36.263600   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:20:36.263628   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:20:36.264019   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:20:36.264582   31154 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 19:20:36.264749   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:20:36.264788   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:20:36.265963   31154 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 19:20:36.265987   31154 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 19:20:36.266008   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:36.270544   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:36.271199   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:36.271222   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:36.271425   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:36.271642   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:36.271818   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:36.272058   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:20:36.283812   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42859
	I1001 19:20:36.284387   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:20:36.284896   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:20:36.284913   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:20:36.285508   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:20:36.285834   31154 main.go:141] libmachine: (ha-193737) Calling .GetState
	I1001 19:20:36.288106   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:36.288393   31154 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 19:20:36.288414   31154 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 19:20:36.288437   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:36.291938   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:36.292436   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:36.292463   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:36.292681   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:36.292858   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:36.293020   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:36.293164   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:20:36.379914   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1001 19:20:36.401549   31154 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 19:20:36.450371   31154 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 19:20:36.756603   31154 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
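The sed pipeline above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host-side gateway (192.168.39.1). Judging from the command itself, the stanza spliced into the Corefile ahead of the forward block is:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }

(plus a `log` line inserted before `errors`).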
	I1001 19:20:37.190467   31154 main.go:141] libmachine: Making call to close driver server
	I1001 19:20:37.190501   31154 main.go:141] libmachine: (ha-193737) Calling .Close
	I1001 19:20:37.190537   31154 main.go:141] libmachine: Making call to close driver server
	I1001 19:20:37.190556   31154 main.go:141] libmachine: (ha-193737) Calling .Close
	I1001 19:20:37.190812   31154 main.go:141] libmachine: Successfully made call to close driver server
	I1001 19:20:37.190821   31154 main.go:141] libmachine: Successfully made call to close driver server
	I1001 19:20:37.190830   31154 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 19:20:37.190833   31154 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 19:20:37.190839   31154 main.go:141] libmachine: Making call to close driver server
	I1001 19:20:37.190841   31154 main.go:141] libmachine: Making call to close driver server
	I1001 19:20:37.190847   31154 main.go:141] libmachine: (ha-193737) Calling .Close
	I1001 19:20:37.190848   31154 main.go:141] libmachine: (ha-193737) Calling .Close
	I1001 19:20:37.191111   31154 main.go:141] libmachine: Successfully made call to close driver server
	I1001 19:20:37.191115   31154 main.go:141] libmachine: Successfully made call to close driver server
	I1001 19:20:37.191125   31154 main.go:141] libmachine: (ha-193737) DBG | Closing plugin on server side
	I1001 19:20:37.191134   31154 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 19:20:37.191134   31154 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 19:20:37.191205   31154 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1001 19:20:37.191222   31154 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1001 19:20:37.191338   31154 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1001 19:20:37.191344   31154 round_trippers.go:469] Request Headers:
	I1001 19:20:37.191354   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:20:37.191358   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:20:37.219411   31154 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I1001 19:20:37.219983   31154 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1001 19:20:37.219997   31154 round_trippers.go:469] Request Headers:
	I1001 19:20:37.220005   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:20:37.220008   31154 round_trippers.go:473]     Content-Type: application/json
	I1001 19:20:37.220011   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:20:37.228402   31154 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1001 19:20:37.228596   31154 main.go:141] libmachine: Making call to close driver server
	I1001 19:20:37.228610   31154 main.go:141] libmachine: (ha-193737) Calling .Close
	I1001 19:20:37.228929   31154 main.go:141] libmachine: Successfully made call to close driver server
	I1001 19:20:37.228950   31154 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 19:20:37.228974   31154 main.go:141] libmachine: (ha-193737) DBG | Closing plugin on server side
	I1001 19:20:37.230600   31154 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1001 19:20:37.231770   31154 addons.go:510] duration metric: took 1.012023889s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1001 19:20:37.231812   31154 start.go:246] waiting for cluster config update ...
	I1001 19:20:37.231823   31154 start.go:255] writing updated cluster config ...
	I1001 19:20:37.233187   31154 out.go:201] 
	I1001 19:20:37.234563   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:20:37.234629   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:20:37.236253   31154 out.go:177] * Starting "ha-193737-m02" control-plane node in "ha-193737" cluster
	I1001 19:20:37.237974   31154 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 19:20:37.238007   31154 cache.go:56] Caching tarball of preloaded images
	I1001 19:20:37.238089   31154 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 19:20:37.238106   31154 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 19:20:37.238204   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:20:37.238426   31154 start.go:360] acquireMachinesLock for ha-193737-m02: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 19:20:37.238490   31154 start.go:364] duration metric: took 37.598µs to acquireMachinesLock for "ha-193737-m02"
	I1001 19:20:37.238511   31154 start.go:93] Provisioning new machine with config: &{Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:20:37.238603   31154 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1001 19:20:37.240050   31154 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 19:20:37.240148   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:20:37.240181   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:20:37.256492   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43983
	I1001 19:20:37.257003   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:20:37.257628   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:20:37.257663   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:20:37.258069   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:20:37.258273   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetMachineName
	I1001 19:20:37.258413   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:37.258584   31154 start.go:159] libmachine.API.Create for "ha-193737" (driver="kvm2")
	I1001 19:20:37.258609   31154 client.go:168] LocalClient.Create starting
	I1001 19:20:37.258644   31154 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem
	I1001 19:20:37.258691   31154 main.go:141] libmachine: Decoding PEM data...
	I1001 19:20:37.258706   31154 main.go:141] libmachine: Parsing certificate...
	I1001 19:20:37.258752   31154 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem
	I1001 19:20:37.258775   31154 main.go:141] libmachine: Decoding PEM data...
	I1001 19:20:37.258791   31154 main.go:141] libmachine: Parsing certificate...
	I1001 19:20:37.258820   31154 main.go:141] libmachine: Running pre-create checks...
	I1001 19:20:37.258831   31154 main.go:141] libmachine: (ha-193737-m02) Calling .PreCreateCheck
	I1001 19:20:37.258981   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetConfigRaw
	I1001 19:20:37.259499   31154 main.go:141] libmachine: Creating machine...
	I1001 19:20:37.259521   31154 main.go:141] libmachine: (ha-193737-m02) Calling .Create
	I1001 19:20:37.259645   31154 main.go:141] libmachine: (ha-193737-m02) Creating KVM machine...
	I1001 19:20:37.261171   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found existing default KVM network
	I1001 19:20:37.261376   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found existing private KVM network mk-ha-193737
	I1001 19:20:37.261582   31154 main.go:141] libmachine: (ha-193737-m02) Setting up store path in /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02 ...
	I1001 19:20:37.261615   31154 main.go:141] libmachine: (ha-193737-m02) Building disk image from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 19:20:37.261632   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:37.261518   31541 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:20:37.261750   31154 main.go:141] libmachine: (ha-193737-m02) Downloading /home/jenkins/minikube-integration/19736-11198/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 19:20:37.511803   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:37.511639   31541 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa...
	I1001 19:20:37.705703   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:37.705550   31541 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/ha-193737-m02.rawdisk...
	I1001 19:20:37.705738   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Writing magic tar header
	I1001 19:20:37.705753   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Writing SSH key tar header
	I1001 19:20:37.705765   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:37.705670   31541 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02 ...
	I1001 19:20:37.705777   31154 main.go:141] libmachine: (ha-193737-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02 (perms=drwx------)
	I1001 19:20:37.705791   31154 main.go:141] libmachine: (ha-193737-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines (perms=drwxr-xr-x)
	I1001 19:20:37.705802   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02
	I1001 19:20:37.705808   31154 main.go:141] libmachine: (ha-193737-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube (perms=drwxr-xr-x)
	I1001 19:20:37.705819   31154 main.go:141] libmachine: (ha-193737-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198 (perms=drwxrwxr-x)
	I1001 19:20:37.705827   31154 main.go:141] libmachine: (ha-193737-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 19:20:37.705840   31154 main.go:141] libmachine: (ha-193737-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 19:20:37.705857   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines
	I1001 19:20:37.705865   31154 main.go:141] libmachine: (ha-193737-m02) Creating domain...
	I1001 19:20:37.705882   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:20:37.705895   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198
	I1001 19:20:37.705908   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 19:20:37.705917   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Checking permissions on dir: /home/jenkins
	I1001 19:20:37.705926   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Checking permissions on dir: /home
	I1001 19:20:37.705934   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Skipping /home - not owner
	I1001 19:20:37.706847   31154 main.go:141] libmachine: (ha-193737-m02) define libvirt domain using xml: 
	I1001 19:20:37.706866   31154 main.go:141] libmachine: (ha-193737-m02) <domain type='kvm'>
	I1001 19:20:37.706875   31154 main.go:141] libmachine: (ha-193737-m02)   <name>ha-193737-m02</name>
	I1001 19:20:37.706882   31154 main.go:141] libmachine: (ha-193737-m02)   <memory unit='MiB'>2200</memory>
	I1001 19:20:37.706889   31154 main.go:141] libmachine: (ha-193737-m02)   <vcpu>2</vcpu>
	I1001 19:20:37.706899   31154 main.go:141] libmachine: (ha-193737-m02)   <features>
	I1001 19:20:37.706907   31154 main.go:141] libmachine: (ha-193737-m02)     <acpi/>
	I1001 19:20:37.706913   31154 main.go:141] libmachine: (ha-193737-m02)     <apic/>
	I1001 19:20:37.706921   31154 main.go:141] libmachine: (ha-193737-m02)     <pae/>
	I1001 19:20:37.706927   31154 main.go:141] libmachine: (ha-193737-m02)     
	I1001 19:20:37.706935   31154 main.go:141] libmachine: (ha-193737-m02)   </features>
	I1001 19:20:37.706943   31154 main.go:141] libmachine: (ha-193737-m02)   <cpu mode='host-passthrough'>
	I1001 19:20:37.706947   31154 main.go:141] libmachine: (ha-193737-m02)   
	I1001 19:20:37.706951   31154 main.go:141] libmachine: (ha-193737-m02)   </cpu>
	I1001 19:20:37.706958   31154 main.go:141] libmachine: (ha-193737-m02)   <os>
	I1001 19:20:37.706963   31154 main.go:141] libmachine: (ha-193737-m02)     <type>hvm</type>
	I1001 19:20:37.706969   31154 main.go:141] libmachine: (ha-193737-m02)     <boot dev='cdrom'/>
	I1001 19:20:37.706979   31154 main.go:141] libmachine: (ha-193737-m02)     <boot dev='hd'/>
	I1001 19:20:37.706999   31154 main.go:141] libmachine: (ha-193737-m02)     <bootmenu enable='no'/>
	I1001 19:20:37.707014   31154 main.go:141] libmachine: (ha-193737-m02)   </os>
	I1001 19:20:37.707026   31154 main.go:141] libmachine: (ha-193737-m02)   <devices>
	I1001 19:20:37.707037   31154 main.go:141] libmachine: (ha-193737-m02)     <disk type='file' device='cdrom'>
	I1001 19:20:37.707052   31154 main.go:141] libmachine: (ha-193737-m02)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/boot2docker.iso'/>
	I1001 19:20:37.707067   31154 main.go:141] libmachine: (ha-193737-m02)       <target dev='hdc' bus='scsi'/>
	I1001 19:20:37.707078   31154 main.go:141] libmachine: (ha-193737-m02)       <readonly/>
	I1001 19:20:37.707090   31154 main.go:141] libmachine: (ha-193737-m02)     </disk>
	I1001 19:20:37.707105   31154 main.go:141] libmachine: (ha-193737-m02)     <disk type='file' device='disk'>
	I1001 19:20:37.707118   31154 main.go:141] libmachine: (ha-193737-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 19:20:37.707132   31154 main.go:141] libmachine: (ha-193737-m02)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/ha-193737-m02.rawdisk'/>
	I1001 19:20:37.707142   31154 main.go:141] libmachine: (ha-193737-m02)       <target dev='hda' bus='virtio'/>
	I1001 19:20:37.707150   31154 main.go:141] libmachine: (ha-193737-m02)     </disk>
	I1001 19:20:37.707164   31154 main.go:141] libmachine: (ha-193737-m02)     <interface type='network'>
	I1001 19:20:37.707176   31154 main.go:141] libmachine: (ha-193737-m02)       <source network='mk-ha-193737'/>
	I1001 19:20:37.707186   31154 main.go:141] libmachine: (ha-193737-m02)       <model type='virtio'/>
	I1001 19:20:37.707196   31154 main.go:141] libmachine: (ha-193737-m02)     </interface>
	I1001 19:20:37.707206   31154 main.go:141] libmachine: (ha-193737-m02)     <interface type='network'>
	I1001 19:20:37.707217   31154 main.go:141] libmachine: (ha-193737-m02)       <source network='default'/>
	I1001 19:20:37.707227   31154 main.go:141] libmachine: (ha-193737-m02)       <model type='virtio'/>
	I1001 19:20:37.707241   31154 main.go:141] libmachine: (ha-193737-m02)     </interface>
	I1001 19:20:37.707259   31154 main.go:141] libmachine: (ha-193737-m02)     <serial type='pty'>
	I1001 19:20:37.707267   31154 main.go:141] libmachine: (ha-193737-m02)       <target port='0'/>
	I1001 19:20:37.707272   31154 main.go:141] libmachine: (ha-193737-m02)     </serial>
	I1001 19:20:37.707279   31154 main.go:141] libmachine: (ha-193737-m02)     <console type='pty'>
	I1001 19:20:37.707283   31154 main.go:141] libmachine: (ha-193737-m02)       <target type='serial' port='0'/>
	I1001 19:20:37.707290   31154 main.go:141] libmachine: (ha-193737-m02)     </console>
	I1001 19:20:37.707295   31154 main.go:141] libmachine: (ha-193737-m02)     <rng model='virtio'>
	I1001 19:20:37.707303   31154 main.go:141] libmachine: (ha-193737-m02)       <backend model='random'>/dev/random</backend>
	I1001 19:20:37.707306   31154 main.go:141] libmachine: (ha-193737-m02)     </rng>
	I1001 19:20:37.707313   31154 main.go:141] libmachine: (ha-193737-m02)     
	I1001 19:20:37.707317   31154 main.go:141] libmachine: (ha-193737-m02)     
	I1001 19:20:37.707323   31154 main.go:141] libmachine: (ha-193737-m02)   </devices>
	I1001 19:20:37.707331   31154 main.go:141] libmachine: (ha-193737-m02) </domain>
	I1001 19:20:37.707362   31154 main.go:141] libmachine: (ha-193737-m02) 
	I1001 19:20:37.714050   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:2e:69:af in network default
	I1001 19:20:37.714587   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:37.714605   31154 main.go:141] libmachine: (ha-193737-m02) Ensuring networks are active...
	I1001 19:20:37.715386   31154 main.go:141] libmachine: (ha-193737-m02) Ensuring network default is active
	I1001 19:20:37.715688   31154 main.go:141] libmachine: (ha-193737-m02) Ensuring network mk-ha-193737 is active
	I1001 19:20:37.716026   31154 main.go:141] libmachine: (ha-193737-m02) Getting domain xml...
	I1001 19:20:37.716683   31154 main.go:141] libmachine: (ha-193737-m02) Creating domain...
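The XML dump above is the full libvirt domain definition the kvm2 driver hands to libvirtd before this "Creating domain..." step. As a rough sketch only, defining and starting such a domain with the libvirt Go bindings could look like the following (the libvirt.org/go/libvirt import path, the file name and the error handling are illustrative assumptions, not taken from the driver):

    package main

    import (
        "log"
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        // Read a pre-rendered domain XML like the one logged above.
        xml, err := os.ReadFile("ha-193737-m02.xml") // hypothetical file name
        if err != nil {
            log.Fatal(err)
        }

        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Define the domain persistently, then start it ("Creating domain...").
        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            log.Fatal(err)
        }
    }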
	I1001 19:20:38.946823   31154 main.go:141] libmachine: (ha-193737-m02) Waiting to get IP...
	I1001 19:20:38.947612   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:38.948069   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:38.948111   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:38.948057   31541 retry.go:31] will retry after 211.487702ms: waiting for machine to come up
	I1001 19:20:39.161472   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:39.161945   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:39.161981   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:39.161920   31541 retry.go:31] will retry after 369.29813ms: waiting for machine to come up
	I1001 19:20:39.532486   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:39.533006   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:39.533034   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:39.532951   31541 retry.go:31] will retry after 340.79833ms: waiting for machine to come up
	I1001 19:20:39.875453   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:39.875902   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:39.875928   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:39.875855   31541 retry.go:31] will retry after 558.36179ms: waiting for machine to come up
	I1001 19:20:40.435617   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:40.436128   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:40.436156   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:40.436070   31541 retry.go:31] will retry after 724.412456ms: waiting for machine to come up
	I1001 19:20:41.161753   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:41.162215   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:41.162238   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:41.162183   31541 retry.go:31] will retry after 921.122771ms: waiting for machine to come up
	I1001 19:20:42.085509   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:42.085978   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:42.086002   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:42.085932   31541 retry.go:31] will retry after 886.914683ms: waiting for machine to come up
	I1001 19:20:42.974460   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:42.974900   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:42.974926   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:42.974856   31541 retry.go:31] will retry after 1.455695023s: waiting for machine to come up
	I1001 19:20:44.432773   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:44.433336   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:44.433365   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:44.433292   31541 retry.go:31] will retry after 1.415796379s: waiting for machine to come up
	I1001 19:20:45.850938   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:45.851337   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:45.851357   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:45.851309   31541 retry.go:31] will retry after 1.972979972s: waiting for machine to come up
	I1001 19:20:47.825356   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:47.825785   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:47.825812   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:47.825732   31541 retry.go:31] will retry after 1.92262401s: waiting for machine to come up
	I1001 19:20:49.750763   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:49.751160   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:49.751177   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:49.751137   31541 retry.go:31] will retry after 3.587777506s: waiting for machine to come up
	I1001 19:20:53.340173   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:53.340566   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:53.340617   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:53.340558   31541 retry.go:31] will retry after 3.748563727s: waiting for machine to come up
	I1001 19:20:57.093502   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.094007   31154 main.go:141] libmachine: (ha-193737-m02) Found IP for machine: 192.168.39.27
	I1001 19:20:57.094023   31154 main.go:141] libmachine: (ha-193737-m02) Reserving static IP address...
	I1001 19:20:57.094037   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has current primary IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.094391   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find host DHCP lease matching {name: "ha-193737-m02", mac: "52:54:00:7b:e4:d4", ip: "192.168.39.27"} in network mk-ha-193737
	I1001 19:20:57.171234   31154 main.go:141] libmachine: (ha-193737-m02) Reserved static IP address: 192.168.39.27
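The repeated "will retry after ...: waiting for machine to come up" lines above are a polling loop with growing, jittered delays that keeps checking the DHCP leases until the new MAC address shows up with an address. A minimal sketch of that pattern, assuming a hypothetical lookupLeaseIP helper rather than the driver's real code:

    package driverutil

    import (
        "fmt"
        "log"
        "math/rand"
        "time"
    )

    // waitForIP polls until lookupLeaseIP returns a non-empty address or the timeout expires.
    func waitForIP(mac string, timeout time.Duration, lookupLeaseIP func(string) string) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip := lookupLeaseIP(mac); ip != "" {
                return ip, nil
            }
            // Grow the delay with some jitter, roughly matching the retry lines above.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            log.Printf("will retry after %v: waiting for machine to come up", sleep)
            time.Sleep(sleep)
            delay *= 2
        }
        return "", fmt.Errorf("no IP for MAC %s within %v", mac, timeout)
    }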
	I1001 19:20:57.171257   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Getting to WaitForSSH function...
	I1001 19:20:57.171265   31154 main.go:141] libmachine: (ha-193737-m02) Waiting for SSH to be available...
	I1001 19:20:57.173965   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.174561   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:57.174594   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.174717   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Using SSH client type: external
	I1001 19:20:57.174748   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa (-rw-------)
	I1001 19:20:57.174779   31154 main.go:141] libmachine: (ha-193737-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.27 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 19:20:57.174794   31154 main.go:141] libmachine: (ha-193737-m02) DBG | About to run SSH command:
	I1001 19:20:57.174810   31154 main.go:141] libmachine: (ha-193737-m02) DBG | exit 0
	I1001 19:20:57.304572   31154 main.go:141] libmachine: (ha-193737-m02) DBG | SSH cmd err, output: <nil>: 
	I1001 19:20:57.304868   31154 main.go:141] libmachine: (ha-193737-m02) KVM machine creation complete!
	I1001 19:20:57.305162   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetConfigRaw
	I1001 19:20:57.305752   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:57.305953   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:57.306163   31154 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 19:20:57.306232   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetState
	I1001 19:20:57.307715   31154 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 19:20:57.307729   31154 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 19:20:57.307736   31154 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 19:20:57.307743   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:57.310409   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.310801   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:57.310826   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.310956   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:57.311136   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.311267   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.311408   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:57.311603   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:57.311799   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1001 19:20:57.311811   31154 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 19:20:57.423687   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
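Both SSH probes above reduce to running "exit 0" on the guest until it succeeds, first through the external ssh binary and then through a native client. A compact sketch of the native variant with golang.org/x/crypto/ssh; only the address, user and key path come from the log, everything else is illustrative:

    package sshcheck

    import (
        "os"

        "golang.org/x/crypto/ssh"
    )

    // canSSH returns nil once "exit 0" runs successfully on the guest.
    func canSSH(addr, keyPath string) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
        }
        client, err := ssh.Dial("tcp", addr, cfg) // e.g. "192.168.39.27:22"
        if err != nil {
            return err
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            return err
        }
        defer session.Close()
        return session.Run("exit 0")
    }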
	I1001 19:20:57.423716   31154 main.go:141] libmachine: Detecting the provisioner...
	I1001 19:20:57.423741   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:57.426918   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.427323   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:57.427358   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.427583   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:57.427788   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.428027   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.428201   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:57.428392   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:57.428632   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1001 19:20:57.428762   31154 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 19:20:57.541173   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 19:20:57.541232   31154 main.go:141] libmachine: found compatible host: buildroot
	I1001 19:20:57.541238   31154 main.go:141] libmachine: Provisioning with buildroot...
	I1001 19:20:57.541245   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetMachineName
	I1001 19:20:57.541504   31154 buildroot.go:166] provisioning hostname "ha-193737-m02"
	I1001 19:20:57.541527   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetMachineName
	I1001 19:20:57.541689   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:57.544406   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.544791   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:57.544830   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.544962   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:57.545135   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.545283   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.545382   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:57.545543   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:57.545753   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1001 19:20:57.545769   31154 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-193737-m02 && echo "ha-193737-m02" | sudo tee /etc/hostname
	I1001 19:20:57.675116   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-193737-m02
	
	I1001 19:20:57.675147   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:57.678239   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.678600   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:57.678624   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.678822   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:57.679011   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.679146   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.679254   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:57.679397   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:57.679573   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1001 19:20:57.679599   31154 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-193737-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-193737-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-193737-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 19:20:57.800899   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 19:20:57.800928   31154 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 19:20:57.800946   31154 buildroot.go:174] setting up certificates
	I1001 19:20:57.800957   31154 provision.go:84] configureAuth start
	I1001 19:20:57.800969   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetMachineName
	I1001 19:20:57.801194   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetIP
	I1001 19:20:57.803613   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.803954   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:57.803982   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.804134   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:57.806340   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.806657   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:57.806678   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.806860   31154 provision.go:143] copyHostCerts
	I1001 19:20:57.806892   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:20:57.806929   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 19:20:57.806937   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:20:57.807013   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 19:20:57.807084   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:20:57.807101   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 19:20:57.807107   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:20:57.807131   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 19:20:57.807178   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:20:57.807196   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 19:20:57.807202   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:20:57.807221   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 19:20:57.807269   31154 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.ha-193737-m02 san=[127.0.0.1 192.168.39.27 ha-193737-m02 localhost minikube]
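The line above signs a server certificate for the new machine against the shared CA, with the org and SAN list shown. A stripped-down sketch of issuing such a SAN-bearing server cert with crypto/x509; the key size, validity window and function name are illustrative assumptions, not minikube's actual parameters:

    package certsketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // issueServerCert signs a server certificate with the given DNS/IP SANs using an existing CA.
    func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, dnsNames []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-193737-m02"}}, // org from the log line above
            DNSNames:     dnsNames, // e.g. ha-193737-m02, localhost, minikube
            IPAddresses:  ips,      // e.g. 127.0.0.1, 192.168.39.27
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return der, key, nil // DER bytes; PEM-encode before writing server.pem / server-key.pem
    }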
	I1001 19:20:58.056549   31154 provision.go:177] copyRemoteCerts
	I1001 19:20:58.056608   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 19:20:58.056631   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:58.059291   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.059620   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.059653   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.059823   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:58.060033   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.060174   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:58.060291   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa Username:docker}
	I1001 19:20:58.146502   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 19:20:58.146577   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 19:20:58.170146   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 19:20:58.170211   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 19:20:58.193090   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 19:20:58.193172   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 19:20:58.215033   31154 provision.go:87] duration metric: took 414.061487ms to configureAuth
	I1001 19:20:58.215067   31154 buildroot.go:189] setting minikube options for container-runtime
	I1001 19:20:58.215250   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:20:58.215327   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:58.218149   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.218497   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.218527   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.218653   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:58.218868   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.219033   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.219156   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:58.219300   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:58.219460   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1001 19:20:58.219473   31154 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 19:20:58.470145   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 19:20:58.470178   31154 main.go:141] libmachine: Checking connection to Docker...
	I1001 19:20:58.470189   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetURL
	I1001 19:20:58.471402   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Using libvirt version 6000000
	I1001 19:20:58.474024   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.474371   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.474412   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.474613   31154 main.go:141] libmachine: Docker is up and running!
	I1001 19:20:58.474631   31154 main.go:141] libmachine: Reticulating splines...
	I1001 19:20:58.474639   31154 client.go:171] duration metric: took 21.216022282s to LocalClient.Create
	I1001 19:20:58.474664   31154 start.go:167] duration metric: took 21.216081227s to libmachine.API.Create "ha-193737"
	I1001 19:20:58.474674   31154 start.go:293] postStartSetup for "ha-193737-m02" (driver="kvm2")
	I1001 19:20:58.474687   31154 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 19:20:58.474711   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:58.475026   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 19:20:58.475056   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:58.477612   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.478051   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.478084   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.478170   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:58.478359   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.478475   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:58.478613   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa Username:docker}
	I1001 19:20:58.566449   31154 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 19:20:58.570622   31154 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 19:20:58.570648   31154 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 19:20:58.570715   31154 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 19:20:58.570786   31154 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 19:20:58.570798   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /etc/ssl/certs/184302.pem
	I1001 19:20:58.570944   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 19:20:58.579535   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:20:58.601457   31154 start.go:296] duration metric: took 126.771104ms for postStartSetup
	I1001 19:20:58.601513   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetConfigRaw
	I1001 19:20:58.602068   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetIP
	I1001 19:20:58.604495   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.604874   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.604900   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.605223   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:20:58.605434   31154 start.go:128] duration metric: took 21.366818669s to createHost
	I1001 19:20:58.605467   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:58.607650   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.608026   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.608051   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.608184   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:58.608337   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.608453   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.608557   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:58.608693   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:58.608837   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1001 19:20:58.608847   31154 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 19:20:58.721980   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727810458.681508368
	
	I1001 19:20:58.722008   31154 fix.go:216] guest clock: 1727810458.681508368
	I1001 19:20:58.722018   31154 fix.go:229] Guest: 2024-10-01 19:20:58.681508368 +0000 UTC Remote: 2024-10-01 19:20:58.605448095 +0000 UTC m=+70.833286913 (delta=76.060273ms)
	I1001 19:20:58.722040   31154 fix.go:200] guest clock delta is within tolerance: 76.060273ms
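The fix.go lines above compare the guest's "date +%s.%N" output with the host clock and accept the machine only when the delta stays inside a tolerance. A small sketch of that comparison; the parsing helper and the one-second threshold are illustrative assumptions, not minikube's real values:

    package clocksketch

    import (
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses `date +%s.%N` output and returns its offset from the local clock.
    func clockDelta(guestOut string) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        secs, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        var nanos int64
        if len(parts) == 2 {
            // Pad or truncate the fractional part to exactly 9 digits of nanoseconds.
            frac := (parts[1] + "000000000")[:9]
            if nanos, err = strconv.ParseInt(frac, 10, 64); err != nil {
                return 0, err
            }
        }
        return time.Since(time.Unix(secs, nanos)), nil
    }

    // withinTolerance uses an illustrative 1s threshold for acceptable clock skew.
    func withinTolerance(d time.Duration) bool {
        if d < 0 {
            d = -d
        }
        return d < time.Second
    }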
	I1001 19:20:58.722049   31154 start.go:83] releasing machines lock for "ha-193737-m02", held for 21.483548504s
	I1001 19:20:58.722074   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:58.722316   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetIP
	I1001 19:20:58.725092   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.725406   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.725439   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.727497   31154 out.go:177] * Found network options:
	I1001 19:20:58.728546   31154 out.go:177]   - NO_PROXY=192.168.39.14
	W1001 19:20:58.729434   31154 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 19:20:58.729479   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:58.729929   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:58.730082   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:58.730149   31154 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 19:20:58.730189   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	W1001 19:20:58.730253   31154 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 19:20:58.730326   31154 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 19:20:58.730347   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:58.732847   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.732897   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.733209   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.733238   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.733263   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.733277   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.733405   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:58.733481   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:58.733618   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.733656   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.733727   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:58.733802   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:58.733822   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa Username:docker}
	I1001 19:20:58.733934   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa Username:docker}
	I1001 19:20:58.972871   31154 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 19:20:58.978194   31154 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 19:20:58.978260   31154 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 19:20:58.994663   31154 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 19:20:58.994684   31154 start.go:495] detecting cgroup driver to use...
	I1001 19:20:58.994738   31154 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 19:20:59.011009   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 19:20:59.025521   31154 docker.go:217] disabling cri-docker service (if available) ...
	I1001 19:20:59.025608   31154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 19:20:59.039348   31154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 19:20:59.052807   31154 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 19:20:59.169289   31154 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 19:20:59.334757   31154 docker.go:233] disabling docker service ...
	I1001 19:20:59.334834   31154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 19:20:59.348035   31154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 19:20:59.360660   31154 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 19:20:59.486509   31154 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 19:20:59.604588   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 19:20:59.617998   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 19:20:59.635554   31154 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 19:20:59.635626   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:59.645574   31154 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 19:20:59.645648   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:59.655487   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:59.665223   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:59.674970   31154 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 19:20:59.684872   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:59.694696   31154 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:59.710618   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
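Taken together, the sed edits above leave the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf with roughly the following values. This is reconstructed from the commands rather than read back from the node, and the section headers are assumptions about where these keys normally live in crio.conf:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]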
	I1001 19:20:59.721089   31154 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 19:20:59.731283   31154 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 19:20:59.731352   31154 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 19:20:59.746274   31154 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 19:20:59.756184   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:20:59.870307   31154 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 19:20:59.956939   31154 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 19:20:59.957022   31154 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 19:20:59.961766   31154 start.go:563] Will wait 60s for crictl version
	I1001 19:20:59.961831   31154 ssh_runner.go:195] Run: which crictl
	I1001 19:20:59.965776   31154 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 19:21:00.010361   31154 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 19:21:00.010446   31154 ssh_runner.go:195] Run: crio --version
	I1001 19:21:00.041083   31154 ssh_runner.go:195] Run: crio --version
	I1001 19:21:00.075668   31154 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 19:21:00.077105   31154 out.go:177]   - env NO_PROXY=192.168.39.14
	I1001 19:21:00.078374   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetIP
	I1001 19:21:00.081375   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:21:00.081679   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:21:00.081711   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:21:00.081983   31154 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 19:21:00.086306   31154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 19:21:00.099180   31154 mustload.go:65] Loading cluster: ha-193737
	I1001 19:21:00.099450   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:21:00.099790   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:21:00.099833   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:21:00.115527   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43263
	I1001 19:21:00.116081   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:21:00.116546   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:21:00.116565   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:21:00.116887   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:21:00.117121   31154 main.go:141] libmachine: (ha-193737) Calling .GetState
	I1001 19:21:00.118679   31154 host.go:66] Checking if "ha-193737" exists ...
	I1001 19:21:00.118968   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:21:00.119005   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:21:00.133660   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37793
	I1001 19:21:00.134171   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:21:00.134638   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:21:00.134657   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:21:00.134945   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:21:00.135112   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:21:00.135251   31154 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737 for IP: 192.168.39.27
	I1001 19:21:00.135263   31154 certs.go:194] generating shared ca certs ...
	I1001 19:21:00.135281   31154 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:21:00.135407   31154 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 19:21:00.135448   31154 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 19:21:00.135454   31154 certs.go:256] generating profile certs ...
	I1001 19:21:00.135523   31154 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key
	I1001 19:21:00.135547   31154 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.b6f75b80
	I1001 19:21:00.135561   31154 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.b6f75b80 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.14 192.168.39.27 192.168.39.254]
	I1001 19:21:00.686434   31154 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.b6f75b80 ...
	I1001 19:21:00.686467   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.b6f75b80: {Name:mkeb01bd9448160d7d89858bc8ed1c53818e2061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:21:00.686650   31154 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.b6f75b80 ...
	I1001 19:21:00.686663   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.b6f75b80: {Name:mk3a8c2ce4c29185d261167caf7207467c082c1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:21:00.686733   31154 certs.go:381] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.b6f75b80 -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt
	I1001 19:21:00.686905   31154 certs.go:385] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.b6f75b80 -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key
	I1001 19:21:00.687041   31154 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key
	I1001 19:21:00.687055   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 19:21:00.687068   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 19:21:00.687080   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 19:21:00.687093   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 19:21:00.687105   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 19:21:00.687117   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 19:21:00.687128   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 19:21:00.687140   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 19:21:00.687188   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem (1338 bytes)
	W1001 19:21:00.687218   31154 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430_empty.pem, impossibly tiny 0 bytes
	I1001 19:21:00.687227   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 19:21:00.687249   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 19:21:00.687269   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 19:21:00.687290   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 19:21:00.687321   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:21:00.687345   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:21:00.687358   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem -> /usr/share/ca-certificates/18430.pem
	I1001 19:21:00.687370   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /usr/share/ca-certificates/184302.pem
	I1001 19:21:00.687398   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:21:00.690221   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:21:00.690721   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:21:00.690750   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:21:00.690891   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:21:00.691103   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:21:00.691297   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:21:00.691469   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:21:00.764849   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1001 19:21:00.770067   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1001 19:21:00.781099   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1001 19:21:00.785191   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1001 19:21:00.796213   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1001 19:21:00.800405   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1001 19:21:00.810899   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1001 19:21:00.815556   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1001 19:21:00.825792   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1001 19:21:00.830049   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1001 19:21:00.841022   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1001 19:21:00.845622   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1001 19:21:00.857011   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 19:21:00.881387   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 19:21:00.905420   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 19:21:00.930584   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 19:21:00.957479   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1001 19:21:00.982115   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 19:21:01.005996   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 19:21:01.031948   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 19:21:01.059129   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 19:21:01.084143   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem --> /usr/share/ca-certificates/18430.pem (1338 bytes)
	I1001 19:21:01.109909   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /usr/share/ca-certificates/184302.pem (1708 bytes)
	I1001 19:21:01.133720   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1001 19:21:01.150500   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1001 19:21:01.168599   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1001 19:21:01.185368   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1001 19:21:01.202279   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1001 19:21:01.218930   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1001 19:21:01.235286   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1001 19:21:01.251963   31154 ssh_runner.go:195] Run: openssl version
	I1001 19:21:01.257542   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 19:21:01.268254   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:21:01.272732   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:21:01.272802   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:21:01.278777   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 19:21:01.290880   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18430.pem && ln -fs /usr/share/ca-certificates/18430.pem /etc/ssl/certs/18430.pem"
	I1001 19:21:01.301840   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18430.pem
	I1001 19:21:01.306397   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 19:21:01.306469   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18430.pem
	I1001 19:21:01.312313   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18430.pem /etc/ssl/certs/51391683.0"
	I1001 19:21:01.322717   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/184302.pem && ln -fs /usr/share/ca-certificates/184302.pem /etc/ssl/certs/184302.pem"
	I1001 19:21:01.333015   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/184302.pem
	I1001 19:21:01.337340   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 19:21:01.337400   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/184302.pem
	I1001 19:21:01.343033   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/184302.pem /etc/ssl/certs/3ec20f2e.0"
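The four openssl/ln steps above are how minikubeCA.pem, 18430.pem and 184302.pem become trusted inside the guest: each certificate's OpenSSL subject hash (b5213941, 51391683, 3ec20f2e) is used as the name of a <hash>.0 symlink under /etc/ssl/certs. A minimal Go sketch of the same pattern, assuming openssl is on PATH and the code runs directly on the node rather than through minikube's SSH runner; the helper name is illustrative, not minikube's code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCACert mirrors the commands in the log: compute the OpenSSL subject
// hash of a CA certificate and link it into /etc/ssl/certs as <hash>.0.
func trustCACert(pemPath string) error {
	// openssl x509 -hash -noout -in <pem> prints the subject hash, e.g. b5213941.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	// Equivalent to: test -L <link> || ln -fs <pem> <link>
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := trustCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}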
	I1001 19:21:01.354495   31154 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 19:21:01.358223   31154 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 19:21:01.358275   31154 kubeadm.go:934] updating node {m02 192.168.39.27 8443 v1.31.1 crio true true} ...
	I1001 19:21:01.358349   31154 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-193737-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.27
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 19:21:01.358373   31154 kube-vip.go:115] generating kube-vip config ...
	I1001 19:21:01.358405   31154 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1001 19:21:01.374873   31154 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1001 19:21:01.374943   31154 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1001 19:21:01.374989   31154 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 19:21:01.384444   31154 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1001 19:21:01.384518   31154 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1001 19:21:01.394161   31154 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1001 19:21:01.394190   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 19:21:01.394191   31154 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1001 19:21:01.394256   31154 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 19:21:01.394189   31154 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1001 19:21:01.398439   31154 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1001 19:21:01.398487   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1001 19:21:02.673266   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 19:21:02.673366   31154 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 19:21:02.678383   31154 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1001 19:21:02.678421   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1001 19:21:02.683681   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 19:21:02.723149   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 19:21:02.723251   31154 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 19:21:02.737865   31154 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1001 19:21:02.737908   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
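The download.go URLs above carry ?checksum=file:<url>.sha256, so each cached binary is verified against its published SHA-256 digest before being copied into /var/lib/minikube/binaries. A stand-alone sketch of that verification step, assuming the .sha256 file holds the hex digest in its first whitespace-separated field; the file names are examples, not fixed paths:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

// verifySHA256 checks that the file at path hashes to the hex digest stored
// in sumPath (first field, as in the published kubelet.sha256 files).
func verifySHA256(path, sumPath string) error {
	want, err := os.ReadFile(sumPath)
	if err != nil {
		return err
	}
	fields := strings.Fields(string(want))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file %s", sumPath)
	}
	wantHex := fields[0]

	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	gotHex := hex.EncodeToString(h.Sum(nil))
	if gotHex != wantHex {
		return fmt.Errorf("checksum mismatch for %s: got %s, want %s", path, gotHex, wantHex)
	}
	return nil
}

func main() {
	if err := verifySHA256("kubelet", "kubelet.sha256"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("checksum OK")
}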
	I1001 19:21:03.230970   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1001 19:21:03.240943   31154 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1001 19:21:03.257655   31154 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 19:21:03.274741   31154 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1001 19:21:03.291537   31154 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1001 19:21:03.295338   31154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
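The grep/cp one-liner above pins control-plane.minikube.internal to the HA VIP 192.168.39.254 in /etc/hosts before the kubelet is started. A small idempotent sketch of the same edit, assuming direct file access instead of the SSH runner used here:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any stale line for host and appends "ip\thost",
// which is what the shell pipeline above does via a temp file and sudo cp.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}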
	I1001 19:21:03.307165   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:21:03.463069   31154 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 19:21:03.480147   31154 host.go:66] Checking if "ha-193737" exists ...
	I1001 19:21:03.480689   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:21:03.480744   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:21:03.495841   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39579
	I1001 19:21:03.496320   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:21:03.496880   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:21:03.496904   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:21:03.497248   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:21:03.497421   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:21:03.497546   31154 start.go:317] joinCluster: &{Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:21:03.497680   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1001 19:21:03.497702   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:21:03.500751   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:21:03.501276   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:21:03.501306   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:21:03.501495   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:21:03.501701   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:21:03.501893   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:21:03.502064   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:21:03.648333   31154 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:21:03.648405   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n692vg.wpdyj1cg443tmqgp --discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-193737-m02 --control-plane --apiserver-advertise-address=192.168.39.27 --apiserver-bind-port=8443"
	I1001 19:21:25.467048   31154 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n692vg.wpdyj1cg443tmqgp --discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-193737-m02 --control-plane --apiserver-advertise-address=192.168.39.27 --apiserver-bind-port=8443": (21.818614216s)
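The kubeadm join above pins the cluster CA with --discovery-token-ca-cert-hash; that value is the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info, which lets the joining node verify the control plane it discovers through the bootstrap token. A minimal sketch of recomputing it, assuming the /var/lib/minikube/certs/ca.crt path used elsewhere in this log:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash returns the kubeadm-style "sha256:<hex>" pin for a CA cert:
// the SHA-256 of the certificate's DER-encoded SubjectPublicKeyInfo.
func caCertHash(caPath string) (string, error) {
	pemBytes, err := os.ReadFile(caPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil || block.Type != "CERTIFICATE" {
		return "", fmt.Errorf("no CERTIFICATE block in %s", caPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return "sha256:" + hex.EncodeToString(sum[:]), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(h) // compare against the hash passed to kubeadm join
}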
	I1001 19:21:25.467085   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1001 19:21:26.061914   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-193737-m02 minikube.k8s.io/updated_at=2024_10_01T19_21_26_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=ha-193737 minikube.k8s.io/primary=false
	I1001 19:21:26.203974   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-193737-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1001 19:21:26.315094   31154 start.go:319] duration metric: took 22.817544624s to joinCluster
	I1001 19:21:26.315164   31154 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:21:26.315617   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:21:26.316452   31154 out.go:177] * Verifying Kubernetes components...
	I1001 19:21:26.317646   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:21:26.611377   31154 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 19:21:26.640565   31154 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 19:21:26.640891   31154 kapi.go:59] client config for ha-193737: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.crt", KeyFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key", CAFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1001 19:21:26.640968   31154 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.14:8443
	I1001 19:21:26.641227   31154 node_ready.go:35] waiting up to 6m0s for node "ha-193737-m02" to be "Ready" ...
	I1001 19:21:26.641356   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:26.641366   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:26.641375   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:26.641380   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:26.653154   31154 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1001 19:21:27.141735   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:27.141756   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:27.141764   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:27.141768   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:27.148495   31154 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1001 19:21:27.641626   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:27.641661   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:27.641672   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:27.641677   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:27.646178   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:28.142172   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:28.142200   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:28.142210   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:28.142216   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:28.146315   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:28.641888   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:28.641917   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:28.641931   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:28.641940   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:28.645578   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:28.646211   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:29.141557   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:29.141582   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:29.141592   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:29.141597   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:29.146956   31154 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1001 19:21:29.641796   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:29.641817   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:29.641824   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:29.641829   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:29.645155   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:30.142079   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:30.142103   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:30.142114   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:30.142119   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:30.145277   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:30.642189   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:30.642209   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:30.642217   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:30.642220   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:30.646863   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:30.647494   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:31.141763   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:31.141784   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:31.141796   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:31.141801   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:31.145813   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:31.641815   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:31.641836   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:31.641847   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:31.641853   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:31.645200   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:32.141448   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:32.141473   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:32.141486   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:32.141493   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:32.145295   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:32.641622   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:32.641643   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:32.641649   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:32.641653   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:32.645174   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:33.141797   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:33.141818   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:33.141826   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:33.141830   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:33.145091   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:33.145688   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:33.641422   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:33.641445   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:33.641454   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:33.641464   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:33.644675   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:34.141560   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:34.141589   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:34.141601   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:34.141607   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:34.145278   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:34.641659   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:34.641678   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:34.641686   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:34.641691   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:34.644811   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:35.142049   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:35.142075   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:35.142083   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:35.142087   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:35.145002   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:35.641531   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:35.641559   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:35.641573   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:35.641586   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:35.644829   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:35.645348   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:36.141635   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:36.141655   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:36.141663   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:36.141668   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:36.144536   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:36.642098   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:36.642119   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:36.642127   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:36.642130   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:36.645313   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:37.142420   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:37.142468   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:37.142477   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:37.142481   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:37.145780   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:37.641627   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:37.641647   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:37.641655   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:37.641659   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:37.644484   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:38.142220   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:38.142244   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:38.142255   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:38.142262   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:38.145466   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:38.146172   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:38.641992   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:38.642015   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:38.642024   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:38.642028   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:38.644515   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:39.141559   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:39.141585   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:39.141595   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:39.141601   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:39.145034   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:39.641804   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:39.641838   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:39.641845   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:39.641850   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:39.646296   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:40.142227   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:40.142248   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:40.142256   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:40.142260   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:40.145591   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:40.642234   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:40.642258   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:40.642267   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:40.642271   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:40.645384   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:40.646037   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:41.142410   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:41.142429   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:41.142437   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:41.142441   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:41.145729   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:41.642146   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:41.642167   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:41.642174   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:41.642178   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:41.645647   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:42.141537   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:42.141559   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:42.141569   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:42.141575   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:42.144817   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:42.642106   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:42.642127   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:42.642136   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:42.642141   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:42.645934   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:42.646419   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:43.141441   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:43.141464   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:43.141472   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:43.141476   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:43.144793   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:43.642316   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:43.642337   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:43.642345   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:43.642351   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:43.646007   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:44.142085   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:44.142106   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:44.142114   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:44.142117   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:44.145431   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:44.642346   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:44.642368   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:44.642376   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:44.642379   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:44.645860   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:45.142289   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:45.142312   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.142323   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.142330   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.145780   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:45.146379   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:45.641699   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:45.641725   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.641733   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.641736   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.645813   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:45.646591   31154 node_ready.go:49] node "ha-193737-m02" has status "Ready":"True"
	I1001 19:21:45.646618   31154 node_ready.go:38] duration metric: took 19.005351721s for node "ha-193737-m02" to be "Ready" ...
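The node_ready wait above polls GET /api/v1/nodes/ha-193737-m02 roughly twice a second until the NodeReady condition reports True (about 19s in this run). A minimal client-go sketch of the same check, assuming the kubeconfig path loaded earlier in this log; it is an illustration, not minikube's actual helper:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19736-11198/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll until the node reports Ready, mirroring the ~500ms loop in the log.
	for {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-193737-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}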
	I1001 19:21:45.646627   31154 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 19:21:45.646691   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:21:45.646700   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.646707   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.646713   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.650655   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:45.657881   31154 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hd5hv" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.657971   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hd5hv
	I1001 19:21:45.657980   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.657988   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.657993   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.660900   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:45.661620   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:45.661639   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.661649   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.661657   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.665733   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:45.666386   31154 pod_ready.go:93] pod "coredns-7c65d6cfc9-hd5hv" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:45.666409   31154 pod_ready.go:82] duration metric: took 8.499445ms for pod "coredns-7c65d6cfc9-hd5hv" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.666421   31154 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v2wsx" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.666492   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-v2wsx
	I1001 19:21:45.666502   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.666512   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.666518   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.669133   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:45.669889   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:45.669907   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.669918   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.669923   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.672275   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:45.672755   31154 pod_ready.go:93] pod "coredns-7c65d6cfc9-v2wsx" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:45.672774   31154 pod_ready.go:82] duration metric: took 6.344856ms for pod "coredns-7c65d6cfc9-v2wsx" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.672786   31154 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.672846   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-ha-193737
	I1001 19:21:45.672857   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.672867   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.672872   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.675287   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:45.675893   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:45.675911   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.675922   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.675930   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.678241   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:45.678741   31154 pod_ready.go:93] pod "etcd-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:45.678763   31154 pod_ready.go:82] duration metric: took 5.967949ms for pod "etcd-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.678772   31154 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.678833   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-ha-193737-m02
	I1001 19:21:45.678850   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.678858   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.678871   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.681191   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:45.681800   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:45.681815   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.681825   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.681830   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.683889   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:45.684431   31154 pod_ready.go:93] pod "etcd-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:45.684453   31154 pod_ready.go:82] duration metric: took 5.673081ms for pod "etcd-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.684473   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.841835   31154 request.go:632] Waited for 157.291258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737
	I1001 19:21:45.841900   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737
	I1001 19:21:45.841906   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.841913   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.841919   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.845357   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:46.042508   31154 request.go:632] Waited for 196.405333ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:46.042588   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:46.042599   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:46.042611   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:46.042619   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:46.046254   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:46.046866   31154 pod_ready.go:93] pod "kube-apiserver-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:46.046884   31154 pod_ready.go:82] duration metric: took 362.399581ms for pod "kube-apiserver-ha-193737" in "kube-system" namespace to be "Ready" ...
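The request.go "Waited for ... due to client-side throttling" lines come from client-go's client-side rate limiter: with QPS and Burst left at 0 in the rest.Config logged earlier, the client falls back to roughly 5 requests/s with a burst of 10, so back-to-back node and pod GETs queue briefly. A short sketch of building a client with higher limits; the values are illustrative, not what minikube uses:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19736-11198/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Raise the client-side limits so short bursts of GETs are not delayed;
	// server-side priority and fairness still applies independently.
	cfg.QPS = 50
	cfg.Burst = 100

	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}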
	I1001 19:21:46.046893   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:46.242039   31154 request.go:632] Waited for 195.063872ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737-m02
	I1001 19:21:46.242144   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737-m02
	I1001 19:21:46.242157   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:46.242168   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:46.242174   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:46.246032   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:46.441916   31154 request.go:632] Waited for 195.330252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:46.441997   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:46.442003   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:46.442011   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:46.442014   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:46.445457   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:46.445994   31154 pod_ready.go:93] pod "kube-apiserver-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:46.446014   31154 pod_ready.go:82] duration metric: took 399.112887ms for pod "kube-apiserver-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:46.446031   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:46.642080   31154 request.go:632] Waited for 195.96912ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737
	I1001 19:21:46.642133   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737
	I1001 19:21:46.642138   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:46.642146   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:46.642149   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:46.645872   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:46.842116   31154 request.go:632] Waited for 195.42226ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:46.842206   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:46.842215   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:46.842223   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:46.842231   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:46.845287   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:46.845743   31154 pod_ready.go:93] pod "kube-controller-manager-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:46.845760   31154 pod_ready.go:82] duration metric: took 399.720077ms for pod "kube-controller-manager-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:46.845770   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:47.042048   31154 request.go:632] Waited for 196.194982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737-m02
	I1001 19:21:47.042116   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737-m02
	I1001 19:21:47.042122   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:47.042129   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:47.042134   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:47.045174   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:47.242154   31154 request.go:632] Waited for 196.389668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:47.242211   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:47.242216   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:47.242224   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:47.242228   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:47.246078   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:47.246437   31154 pod_ready.go:93] pod "kube-controller-manager-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:47.246460   31154 pod_ready.go:82] duration metric: took 400.684034ms for pod "kube-controller-manager-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:47.246470   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4294m" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:47.442023   31154 request.go:632] Waited for 195.496186ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4294m
	I1001 19:21:47.442102   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4294m
	I1001 19:21:47.442107   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:47.442115   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:47.442119   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:47.446724   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:47.642099   31154 request.go:632] Waited for 194.348221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:47.642163   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:47.642174   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:47.642181   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:47.642186   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:47.645393   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:47.645928   31154 pod_ready.go:93] pod "kube-proxy-4294m" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:47.645950   31154 pod_ready.go:82] duration metric: took 399.472712ms for pod "kube-proxy-4294m" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:47.645961   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zpsll" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:47.842563   31154 request.go:632] Waited for 196.53672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zpsll
	I1001 19:21:47.842654   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zpsll
	I1001 19:21:47.842670   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:47.842677   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:47.842685   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:47.846435   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:48.042435   31154 request.go:632] Waited for 195.268783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:48.042516   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:48.042523   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:48.042531   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:48.042535   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:48.045444   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:48.045979   31154 pod_ready.go:93] pod "kube-proxy-zpsll" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:48.045999   31154 pod_ready.go:82] duration metric: took 400.030874ms for pod "kube-proxy-zpsll" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:48.046008   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:48.242127   31154 request.go:632] Waited for 196.061352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737
	I1001 19:21:48.242188   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737
	I1001 19:21:48.242194   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:48.242200   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:48.242205   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:48.245701   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:48.442714   31154 request.go:632] Waited for 196.392016ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:48.442788   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:48.442796   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:48.442806   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:48.442811   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:48.445488   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:48.445923   31154 pod_ready.go:93] pod "kube-scheduler-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:48.445941   31154 pod_ready.go:82] duration metric: took 399.927294ms for pod "kube-scheduler-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:48.445950   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:48.642436   31154 request.go:632] Waited for 196.414559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737-m02
	I1001 19:21:48.642504   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737-m02
	I1001 19:21:48.642511   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:48.642520   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:48.642528   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:48.645886   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:48.841792   31154 request.go:632] Waited for 195.303821ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:48.841877   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:48.841893   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:48.841907   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:48.841917   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:48.845141   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:48.845610   31154 pod_ready.go:93] pod "kube-scheduler-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:48.845627   31154 pod_ready.go:82] duration metric: took 399.670346ms for pod "kube-scheduler-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:48.845638   31154 pod_ready.go:39] duration metric: took 3.199000029s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
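
The readiness loop above issues a throttled GET for each control-plane pod and then for its node before declaring the pod "Ready". A minimal client-go sketch of that kind of Ready-condition poll follows; it assumes a kubeconfig-based clientset and only illustrates the pattern, it is not minikube's pod_ready helper.

// podready_sketch.go - poll a pod until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady re-fetches the pod until its PodReady condition is True or the
// timeout expires, sleeping briefly between attempts (roughly the cadence seen in the log).
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(400 * time.Millisecond)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "kube-scheduler-ha-193737-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
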
	I1001 19:21:48.845650   31154 api_server.go:52] waiting for apiserver process to appear ...
	I1001 19:21:48.845706   31154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 19:21:48.860102   31154 api_server.go:72] duration metric: took 22.544907394s to wait for apiserver process to appear ...
	I1001 19:21:48.860136   31154 api_server.go:88] waiting for apiserver healthz status ...
	I1001 19:21:48.860157   31154 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I1001 19:21:48.864372   31154 api_server.go:279] https://192.168.39.14:8443/healthz returned 200:
	ok
	I1001 19:21:48.864454   31154 round_trippers.go:463] GET https://192.168.39.14:8443/version
	I1001 19:21:48.864464   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:48.864471   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:48.864475   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:48.865481   31154 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1001 19:21:48.865563   31154 api_server.go:141] control plane version: v1.31.1
	I1001 19:21:48.865578   31154 api_server.go:131] duration metric: took 5.43668ms to wait for apiserver health ...
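
The health check above is a plain HTTPS GET against /healthz that expects a 200 response with body "ok", followed by a /version request to read the control-plane version. A small sketch of such a probe is shown below; it skips TLS verification for brevity, whereas the real client trusts the cluster CA.

// healthz_sketch.go - probe an apiserver /healthz endpoint over HTTPS.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Insecure for illustration only; a real check should verify the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.14:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body)) // expect: 200 ok
}
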
	I1001 19:21:48.865588   31154 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 19:21:49.042005   31154 request.go:632] Waited for 176.346586ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:21:49.042080   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:21:49.042086   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:49.042096   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:49.042103   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:49.046797   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:49.050697   31154 system_pods.go:59] 17 kube-system pods found
	I1001 19:21:49.050730   31154 system_pods.go:61] "coredns-7c65d6cfc9-hd5hv" [31f0afff-5571-46d6-888f-8982c71ba191] Running
	I1001 19:21:49.050741   31154 system_pods.go:61] "coredns-7c65d6cfc9-v2wsx" [8e3dd318-5017-4ada-bf2f-61b640ee2030] Running
	I1001 19:21:49.050745   31154 system_pods.go:61] "etcd-ha-193737" [99c3674d-50d6-4160-89dd-afb3c2e71039] Running
	I1001 19:21:49.050749   31154 system_pods.go:61] "etcd-ha-193737-m02" [a541b9c8-c10e-45b4-ac9c-d16e8bf659b0] Running
	I1001 19:21:49.050752   31154 system_pods.go:61] "kindnet-drdlr" [13177890-a0eb-47ff-8fe2-585992810e47] Running
	I1001 19:21:49.050755   31154 system_pods.go:61] "kindnet-wnr6g" [89e11419-0c5c-486e-bdbf-eaf6fab1e62c] Running
	I1001 19:21:49.050758   31154 system_pods.go:61] "kube-apiserver-ha-193737" [432bf6a6-4607-4f55-a026-d910075d3145] Running
	I1001 19:21:49.050761   31154 system_pods.go:61] "kube-apiserver-ha-193737-m02" [379a24af-2148-4870-9f4b-25ad48943445] Running
	I1001 19:21:49.050764   31154 system_pods.go:61] "kube-controller-manager-ha-193737" [95fb44db-8cb3-4db9-8a84-ab82e15760de] Running
	I1001 19:21:49.050768   31154 system_pods.go:61] "kube-controller-manager-ha-193737-m02" [5b0821e0-673a-440c-8ca1-fefbfe913b94] Running
	I1001 19:21:49.050771   31154 system_pods.go:61] "kube-proxy-4294m" [f454ca0f-d662-4dfd-ab77-ebccaf3a6b12] Running
	I1001 19:21:49.050773   31154 system_pods.go:61] "kube-proxy-zpsll" [c18fec3c-2880-4860-b220-a44d5e523bed] Running
	I1001 19:21:49.050777   31154 system_pods.go:61] "kube-scheduler-ha-193737" [46ca2b37-6145-4111-9088-bcf51307be3a] Running
	I1001 19:21:49.050780   31154 system_pods.go:61] "kube-scheduler-ha-193737-m02" [72be6cd5-d226-46d6-b675-99014e544dfb] Running
	I1001 19:21:49.050783   31154 system_pods.go:61] "kube-vip-ha-193737" [cbe8e6a4-08f3-4db3-af4d-810a5592597c] Running
	I1001 19:21:49.050790   31154 system_pods.go:61] "kube-vip-ha-193737-m02" [da7dfa59-1b27-45ce-9e61-ad8da40ec548] Running
	I1001 19:21:49.050793   31154 system_pods.go:61] "storage-provisioner" [d5b587a6-418b-47e5-9bf7-3fb6fa5e3372] Running
	I1001 19:21:49.050802   31154 system_pods.go:74] duration metric: took 185.209049ms to wait for pod list to return data ...
	I1001 19:21:49.050812   31154 default_sa.go:34] waiting for default service account to be created ...
	I1001 19:21:49.242249   31154 request.go:632] Waited for 191.355869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/default/serviceaccounts
	I1001 19:21:49.242329   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/default/serviceaccounts
	I1001 19:21:49.242336   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:49.242346   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:49.242365   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:49.246320   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:49.246557   31154 default_sa.go:45] found service account: "default"
	I1001 19:21:49.246575   31154 default_sa.go:55] duration metric: took 195.756912ms for default service account to be created ...
	I1001 19:21:49.246582   31154 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 19:21:49.442016   31154 request.go:632] Waited for 195.370336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:21:49.442076   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:21:49.442083   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:49.442092   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:49.442101   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:49.446494   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:49.452730   31154 system_pods.go:86] 17 kube-system pods found
	I1001 19:21:49.452758   31154 system_pods.go:89] "coredns-7c65d6cfc9-hd5hv" [31f0afff-5571-46d6-888f-8982c71ba191] Running
	I1001 19:21:49.452764   31154 system_pods.go:89] "coredns-7c65d6cfc9-v2wsx" [8e3dd318-5017-4ada-bf2f-61b640ee2030] Running
	I1001 19:21:49.452768   31154 system_pods.go:89] "etcd-ha-193737" [99c3674d-50d6-4160-89dd-afb3c2e71039] Running
	I1001 19:21:49.452772   31154 system_pods.go:89] "etcd-ha-193737-m02" [a541b9c8-c10e-45b4-ac9c-d16e8bf659b0] Running
	I1001 19:21:49.452775   31154 system_pods.go:89] "kindnet-drdlr" [13177890-a0eb-47ff-8fe2-585992810e47] Running
	I1001 19:21:49.452778   31154 system_pods.go:89] "kindnet-wnr6g" [89e11419-0c5c-486e-bdbf-eaf6fab1e62c] Running
	I1001 19:21:49.452781   31154 system_pods.go:89] "kube-apiserver-ha-193737" [432bf6a6-4607-4f55-a026-d910075d3145] Running
	I1001 19:21:49.452784   31154 system_pods.go:89] "kube-apiserver-ha-193737-m02" [379a24af-2148-4870-9f4b-25ad48943445] Running
	I1001 19:21:49.452788   31154 system_pods.go:89] "kube-controller-manager-ha-193737" [95fb44db-8cb3-4db9-8a84-ab82e15760de] Running
	I1001 19:21:49.452791   31154 system_pods.go:89] "kube-controller-manager-ha-193737-m02" [5b0821e0-673a-440c-8ca1-fefbfe913b94] Running
	I1001 19:21:49.452793   31154 system_pods.go:89] "kube-proxy-4294m" [f454ca0f-d662-4dfd-ab77-ebccaf3a6b12] Running
	I1001 19:21:49.452803   31154 system_pods.go:89] "kube-proxy-zpsll" [c18fec3c-2880-4860-b220-a44d5e523bed] Running
	I1001 19:21:49.452806   31154 system_pods.go:89] "kube-scheduler-ha-193737" [46ca2b37-6145-4111-9088-bcf51307be3a] Running
	I1001 19:21:49.452809   31154 system_pods.go:89] "kube-scheduler-ha-193737-m02" [72be6cd5-d226-46d6-b675-99014e544dfb] Running
	I1001 19:21:49.452812   31154 system_pods.go:89] "kube-vip-ha-193737" [cbe8e6a4-08f3-4db3-af4d-810a5592597c] Running
	I1001 19:21:49.452815   31154 system_pods.go:89] "kube-vip-ha-193737-m02" [da7dfa59-1b27-45ce-9e61-ad8da40ec548] Running
	I1001 19:21:49.452817   31154 system_pods.go:89] "storage-provisioner" [d5b587a6-418b-47e5-9bf7-3fb6fa5e3372] Running
	I1001 19:21:49.452823   31154 system_pods.go:126] duration metric: took 206.236353ms to wait for k8s-apps to be running ...
	I1001 19:21:49.452833   31154 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 19:21:49.452882   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 19:21:49.467775   31154 system_svc.go:56] duration metric: took 14.93254ms WaitForService to wait for kubelet
	I1001 19:21:49.467809   31154 kubeadm.go:582] duration metric: took 23.152617942s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 19:21:49.467833   31154 node_conditions.go:102] verifying NodePressure condition ...
	I1001 19:21:49.642303   31154 request.go:632] Waited for 174.372716ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes
	I1001 19:21:49.642352   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes
	I1001 19:21:49.642356   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:49.642364   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:49.642369   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:49.646440   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:49.647131   31154 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 19:21:49.647176   31154 node_conditions.go:123] node cpu capacity is 2
	I1001 19:21:49.647192   31154 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 19:21:49.647199   31154 node_conditions.go:123] node cpu capacity is 2
	I1001 19:21:49.647206   31154 node_conditions.go:105] duration metric: took 179.366973ms to run NodePressure ...
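
The NodePressure step reads each node's ephemeral-storage and cpu capacity from its status. A sketch of listing the nodes and printing those two capacities with client-go, again assuming a kubeconfig clientset, could look like this:

// nodecap_sketch.go - list nodes and print the capacities reported in the log.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
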
	I1001 19:21:49.647235   31154 start.go:241] waiting for startup goroutines ...
	I1001 19:21:49.647267   31154 start.go:255] writing updated cluster config ...
	I1001 19:21:49.649327   31154 out.go:201] 
	I1001 19:21:49.650621   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:21:49.650719   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:21:49.652065   31154 out.go:177] * Starting "ha-193737-m03" control-plane node in "ha-193737" cluster
	I1001 19:21:49.653048   31154 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 19:21:49.653076   31154 cache.go:56] Caching tarball of preloaded images
	I1001 19:21:49.653193   31154 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 19:21:49.653209   31154 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 19:21:49.653361   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:21:49.653640   31154 start.go:360] acquireMachinesLock for ha-193737-m03: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 19:21:49.653690   31154 start.go:364] duration metric: took 31.444µs to acquireMachinesLock for "ha-193737-m03"
	I1001 19:21:49.653709   31154 start.go:93] Provisioning new machine with config: &{Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:21:49.653808   31154 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1001 19:21:49.655218   31154 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 19:21:49.655330   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:21:49.655375   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:21:49.671457   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36635
	I1001 19:21:49.672015   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:21:49.672579   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:21:49.672608   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:21:49.673005   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:21:49.673189   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetMachineName
	I1001 19:21:49.673372   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:21:49.673585   31154 start.go:159] libmachine.API.Create for "ha-193737" (driver="kvm2")
	I1001 19:21:49.673614   31154 client.go:168] LocalClient.Create starting
	I1001 19:21:49.673650   31154 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem
	I1001 19:21:49.673691   31154 main.go:141] libmachine: Decoding PEM data...
	I1001 19:21:49.673722   31154 main.go:141] libmachine: Parsing certificate...
	I1001 19:21:49.673797   31154 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem
	I1001 19:21:49.673824   31154 main.go:141] libmachine: Decoding PEM data...
	I1001 19:21:49.673838   31154 main.go:141] libmachine: Parsing certificate...
	I1001 19:21:49.673873   31154 main.go:141] libmachine: Running pre-create checks...
	I1001 19:21:49.673885   31154 main.go:141] libmachine: (ha-193737-m03) Calling .PreCreateCheck
	I1001 19:21:49.674030   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetConfigRaw
	I1001 19:21:49.674391   31154 main.go:141] libmachine: Creating machine...
	I1001 19:21:49.674405   31154 main.go:141] libmachine: (ha-193737-m03) Calling .Create
	I1001 19:21:49.674509   31154 main.go:141] libmachine: (ha-193737-m03) Creating KVM machine...
	I1001 19:21:49.675629   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found existing default KVM network
	I1001 19:21:49.675774   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found existing private KVM network mk-ha-193737
	I1001 19:21:49.675890   31154 main.go:141] libmachine: (ha-193737-m03) Setting up store path in /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03 ...
	I1001 19:21:49.675911   31154 main.go:141] libmachine: (ha-193737-m03) Building disk image from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 19:21:49.675957   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:49.675868   32386 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:21:49.676067   31154 main.go:141] libmachine: (ha-193737-m03) Downloading /home/jenkins/minikube-integration/19736-11198/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 19:21:49.919887   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:49.919775   32386 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa...
	I1001 19:21:50.197974   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:50.197797   32386 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/ha-193737-m03.rawdisk...
	I1001 19:21:50.198009   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Writing magic tar header
	I1001 19:21:50.198030   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Writing SSH key tar header
	I1001 19:21:50.198044   31154 main.go:141] libmachine: (ha-193737-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03 (perms=drwx------)
	I1001 19:21:50.198058   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:50.197915   32386 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03 ...
	I1001 19:21:50.198069   31154 main.go:141] libmachine: (ha-193737-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines (perms=drwxr-xr-x)
	I1001 19:21:50.198088   31154 main.go:141] libmachine: (ha-193737-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube (perms=drwxr-xr-x)
	I1001 19:21:50.198099   31154 main.go:141] libmachine: (ha-193737-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198 (perms=drwxrwxr-x)
	I1001 19:21:50.198109   31154 main.go:141] libmachine: (ha-193737-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 19:21:50.198128   31154 main.go:141] libmachine: (ha-193737-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 19:21:50.198141   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03
	I1001 19:21:50.198152   31154 main.go:141] libmachine: (ha-193737-m03) Creating domain...
	I1001 19:21:50.198180   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines
	I1001 19:21:50.198190   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:21:50.198206   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198
	I1001 19:21:50.198215   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 19:21:50.198224   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Checking permissions on dir: /home/jenkins
	I1001 19:21:50.198235   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Checking permissions on dir: /home
	I1001 19:21:50.198248   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Skipping /home - not owner
	I1001 19:21:50.199136   31154 main.go:141] libmachine: (ha-193737-m03) define libvirt domain using xml: 
	I1001 19:21:50.199163   31154 main.go:141] libmachine: (ha-193737-m03) <domain type='kvm'>
	I1001 19:21:50.199174   31154 main.go:141] libmachine: (ha-193737-m03)   <name>ha-193737-m03</name>
	I1001 19:21:50.199182   31154 main.go:141] libmachine: (ha-193737-m03)   <memory unit='MiB'>2200</memory>
	I1001 19:21:50.199192   31154 main.go:141] libmachine: (ha-193737-m03)   <vcpu>2</vcpu>
	I1001 19:21:50.199198   31154 main.go:141] libmachine: (ha-193737-m03)   <features>
	I1001 19:21:50.199207   31154 main.go:141] libmachine: (ha-193737-m03)     <acpi/>
	I1001 19:21:50.199216   31154 main.go:141] libmachine: (ha-193737-m03)     <apic/>
	I1001 19:21:50.199226   31154 main.go:141] libmachine: (ha-193737-m03)     <pae/>
	I1001 19:21:50.199234   31154 main.go:141] libmachine: (ha-193737-m03)     
	I1001 19:21:50.199241   31154 main.go:141] libmachine: (ha-193737-m03)   </features>
	I1001 19:21:50.199248   31154 main.go:141] libmachine: (ha-193737-m03)   <cpu mode='host-passthrough'>
	I1001 19:21:50.199270   31154 main.go:141] libmachine: (ha-193737-m03)   
	I1001 19:21:50.199286   31154 main.go:141] libmachine: (ha-193737-m03)   </cpu>
	I1001 19:21:50.199295   31154 main.go:141] libmachine: (ha-193737-m03)   <os>
	I1001 19:21:50.199303   31154 main.go:141] libmachine: (ha-193737-m03)     <type>hvm</type>
	I1001 19:21:50.199315   31154 main.go:141] libmachine: (ha-193737-m03)     <boot dev='cdrom'/>
	I1001 19:21:50.199323   31154 main.go:141] libmachine: (ha-193737-m03)     <boot dev='hd'/>
	I1001 19:21:50.199334   31154 main.go:141] libmachine: (ha-193737-m03)     <bootmenu enable='no'/>
	I1001 19:21:50.199343   31154 main.go:141] libmachine: (ha-193737-m03)   </os>
	I1001 19:21:50.199352   31154 main.go:141] libmachine: (ha-193737-m03)   <devices>
	I1001 19:21:50.199367   31154 main.go:141] libmachine: (ha-193737-m03)     <disk type='file' device='cdrom'>
	I1001 19:21:50.199383   31154 main.go:141] libmachine: (ha-193737-m03)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/boot2docker.iso'/>
	I1001 19:21:50.199394   31154 main.go:141] libmachine: (ha-193737-m03)       <target dev='hdc' bus='scsi'/>
	I1001 19:21:50.199404   31154 main.go:141] libmachine: (ha-193737-m03)       <readonly/>
	I1001 19:21:50.199413   31154 main.go:141] libmachine: (ha-193737-m03)     </disk>
	I1001 19:21:50.199425   31154 main.go:141] libmachine: (ha-193737-m03)     <disk type='file' device='disk'>
	I1001 19:21:50.199441   31154 main.go:141] libmachine: (ha-193737-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 19:21:50.199458   31154 main.go:141] libmachine: (ha-193737-m03)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/ha-193737-m03.rawdisk'/>
	I1001 19:21:50.199468   31154 main.go:141] libmachine: (ha-193737-m03)       <target dev='hda' bus='virtio'/>
	I1001 19:21:50.199477   31154 main.go:141] libmachine: (ha-193737-m03)     </disk>
	I1001 19:21:50.199486   31154 main.go:141] libmachine: (ha-193737-m03)     <interface type='network'>
	I1001 19:21:50.199495   31154 main.go:141] libmachine: (ha-193737-m03)       <source network='mk-ha-193737'/>
	I1001 19:21:50.199503   31154 main.go:141] libmachine: (ha-193737-m03)       <model type='virtio'/>
	I1001 19:21:50.199531   31154 main.go:141] libmachine: (ha-193737-m03)     </interface>
	I1001 19:21:50.199562   31154 main.go:141] libmachine: (ha-193737-m03)     <interface type='network'>
	I1001 19:21:50.199576   31154 main.go:141] libmachine: (ha-193737-m03)       <source network='default'/>
	I1001 19:21:50.199588   31154 main.go:141] libmachine: (ha-193737-m03)       <model type='virtio'/>
	I1001 19:21:50.199599   31154 main.go:141] libmachine: (ha-193737-m03)     </interface>
	I1001 19:21:50.199608   31154 main.go:141] libmachine: (ha-193737-m03)     <serial type='pty'>
	I1001 19:21:50.199619   31154 main.go:141] libmachine: (ha-193737-m03)       <target port='0'/>
	I1001 19:21:50.199627   31154 main.go:141] libmachine: (ha-193737-m03)     </serial>
	I1001 19:21:50.199662   31154 main.go:141] libmachine: (ha-193737-m03)     <console type='pty'>
	I1001 19:21:50.199708   31154 main.go:141] libmachine: (ha-193737-m03)       <target type='serial' port='0'/>
	I1001 19:21:50.199726   31154 main.go:141] libmachine: (ha-193737-m03)     </console>
	I1001 19:21:50.199748   31154 main.go:141] libmachine: (ha-193737-m03)     <rng model='virtio'>
	I1001 19:21:50.199767   31154 main.go:141] libmachine: (ha-193737-m03)       <backend model='random'>/dev/random</backend>
	I1001 19:21:50.199780   31154 main.go:141] libmachine: (ha-193737-m03)     </rng>
	I1001 19:21:50.199794   31154 main.go:141] libmachine: (ha-193737-m03)     
	I1001 19:21:50.199803   31154 main.go:141] libmachine: (ha-193737-m03)     
	I1001 19:21:50.199814   31154 main.go:141] libmachine: (ha-193737-m03)   </devices>
	I1001 19:21:50.199820   31154 main.go:141] libmachine: (ha-193737-m03) </domain>
	I1001 19:21:50.199837   31154 main.go:141] libmachine: (ha-193737-m03) 
	I1001 19:21:50.206580   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:8b:a8:e7 in network default
	I1001 19:21:50.207376   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:50.207405   31154 main.go:141] libmachine: (ha-193737-m03) Ensuring networks are active...
	I1001 19:21:50.208168   31154 main.go:141] libmachine: (ha-193737-m03) Ensuring network default is active
	I1001 19:21:50.208498   31154 main.go:141] libmachine: (ha-193737-m03) Ensuring network mk-ha-193737 is active
	I1001 19:21:50.208873   31154 main.go:141] libmachine: (ha-193737-m03) Getting domain xml...
	I1001 19:21:50.209740   31154 main.go:141] libmachine: (ha-193737-m03) Creating domain...
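
The domain definition printed above is ordinary libvirt XML. Defining and starting such a domain can be sketched with the libvirt Go bindings (libvirt.org/go/libvirt); the connection URI matches the KVMQemuURI in the config dump, and the XML file path below is a placeholder rather than a file minikube actually writes.

// domain_sketch.go - define a libvirt domain from XML and start it.
package main

import (
	"fmt"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Hypothetical file holding a <domain> description like the one in the log.
	xml, err := os.ReadFile("ha-193737-m03.xml")
	if err != nil {
		panic(err)
	}
	dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // "Creating domain..." (starts the VM)
		panic(err)
	}
	name, _ := dom.GetName()
	fmt.Println("domain started:", name)
}
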
	I1001 19:21:51.487699   31154 main.go:141] libmachine: (ha-193737-m03) Waiting to get IP...
	I1001 19:21:51.488558   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:51.488971   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:51.488988   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:51.488956   32386 retry.go:31] will retry after 292.057466ms: waiting for machine to come up
	I1001 19:21:51.782677   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:51.783145   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:51.783197   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:51.783106   32386 retry.go:31] will retry after 354.701551ms: waiting for machine to come up
	I1001 19:21:52.139803   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:52.140295   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:52.140322   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:52.140239   32386 retry.go:31] will retry after 363.996754ms: waiting for machine to come up
	I1001 19:21:52.505881   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:52.506427   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:52.506447   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:52.506386   32386 retry.go:31] will retry after 414.43192ms: waiting for machine to come up
	I1001 19:21:52.922204   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:52.922737   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:52.922766   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:52.922724   32386 retry.go:31] will retry after 579.407554ms: waiting for machine to come up
	I1001 19:21:53.503613   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:53.504058   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:53.504085   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:53.504000   32386 retry.go:31] will retry after 721.311664ms: waiting for machine to come up
	I1001 19:21:54.227110   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:54.227610   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:54.227655   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:54.227567   32386 retry.go:31] will retry after 1.130708111s: waiting for machine to come up
	I1001 19:21:55.360491   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:55.360900   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:55.360926   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:55.360870   32386 retry.go:31] will retry after 1.468803938s: waiting for machine to come up
	I1001 19:21:56.831225   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:56.831722   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:56.831750   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:56.831677   32386 retry.go:31] will retry after 1.742550848s: waiting for machine to come up
	I1001 19:21:58.576460   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:58.576859   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:58.576883   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:58.576823   32386 retry.go:31] will retry after 1.623668695s: waiting for machine to come up
	I1001 19:22:00.201759   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:00.202340   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:22:00.202361   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:22:00.202290   32386 retry.go:31] will retry after 1.997667198s: waiting for machine to come up
	I1001 19:22:02.201433   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:02.201901   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:22:02.201917   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:22:02.201868   32386 retry.go:31] will retry after 2.886327611s: waiting for machine to come up
	I1001 19:22:05.090402   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:05.090907   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:22:05.090933   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:22:05.090844   32386 retry.go:31] will retry after 3.87427099s: waiting for machine to come up
	I1001 19:22:08.966290   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:08.966719   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:22:08.966754   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:22:08.966674   32386 retry.go:31] will retry after 4.039315752s: waiting for machine to come up
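
The wait-for-IP phase above retries with a growing, jittered delay until the DHCP lease shows up. A generic sketch of that retry cadence follows; the check function is a stand-in for querying the domain's lease, and the intervals only approximate the ones in the log.

// retry_sketch.go - poll a condition with growing, jittered backoff until a deadline.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryUntil(check func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := check(); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		wait := delay + jitter
		fmt.Printf("attempt %d: will retry after %v\n", attempt, wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the base delay each round
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	start := time.Now()
	ip, err := retryUntil(func() (string, error) {
		// Stand-in for "does the domain have a DHCP lease yet?"
		if time.Since(start) > 3*time.Second {
			return "192.168.39.101", nil
		}
		return "", errors.New("no lease yet")
	}, 30*time.Second)
	fmt.Println(ip, err)
}
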
	I1001 19:22:13.009358   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.009842   31154 main.go:141] libmachine: (ha-193737-m03) Found IP for machine: 192.168.39.101
	I1001 19:22:13.009868   31154 main.go:141] libmachine: (ha-193737-m03) Reserving static IP address...
	I1001 19:22:13.009881   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has current primary IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.010863   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find host DHCP lease matching {name: "ha-193737-m03", mac: "52:54:00:9e:b9:5c", ip: "192.168.39.101"} in network mk-ha-193737
	I1001 19:22:13.088968   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Getting to WaitForSSH function...
	I1001 19:22:13.088993   31154 main.go:141] libmachine: (ha-193737-m03) Reserved static IP address: 192.168.39.101
	I1001 19:22:13.089006   31154 main.go:141] libmachine: (ha-193737-m03) Waiting for SSH to be available...
	I1001 19:22:13.091870   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.092415   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.092449   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.092644   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Using SSH client type: external
	I1001 19:22:13.092667   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa (-rw-------)
	I1001 19:22:13.092694   31154 main.go:141] libmachine: (ha-193737-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.101 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 19:22:13.092712   31154 main.go:141] libmachine: (ha-193737-m03) DBG | About to run SSH command:
	I1001 19:22:13.092731   31154 main.go:141] libmachine: (ha-193737-m03) DBG | exit 0
	I1001 19:22:13.220534   31154 main.go:141] libmachine: (ha-193737-m03) DBG | SSH cmd err, output: <nil>: 
	I1001 19:22:13.220779   31154 main.go:141] libmachine: (ha-193737-m03) KVM machine creation complete!
	I1001 19:22:13.221074   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetConfigRaw
	I1001 19:22:13.221579   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:22:13.221804   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:22:13.221984   31154 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 19:22:13.222002   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetState
	I1001 19:22:13.223279   31154 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 19:22:13.223293   31154 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 19:22:13.223299   31154 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 19:22:13.223305   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:13.225923   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.226398   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.226416   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.226678   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:13.226887   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.227052   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.227186   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:13.227368   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:22:13.227559   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1001 19:22:13.227571   31154 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 19:22:13.332328   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
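
The probe above simply runs `exit 0` over SSH to decide that the machine is reachable. The same check can be sketched with golang.org/x/crypto/ssh, authenticating with the machine's private key and ignoring host keys just as the -o StrictHostKeyChecking=no options in the log do; the key path here is illustrative.

// sshcheck_sketch.go - run "exit 0" over SSH as a reachability check.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := os.ExpandEnv("$HOME/.minikube/machines/ha-193737-m03/id_rsa") // illustrative path
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no
	}
	client, err := ssh.Dial("tcp", "192.168.39.101:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	if err := session.Run("exit 0"); err != nil {
		panic(err)
	}
	fmt.Println("SSH is available")
}
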
	I1001 19:22:13.332352   31154 main.go:141] libmachine: Detecting the provisioner...
	I1001 19:22:13.332384   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:13.335169   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.335569   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.335603   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.335764   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:13.336042   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.336239   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.336386   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:13.336591   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:22:13.336771   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1001 19:22:13.336783   31154 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 19:22:13.445518   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 19:22:13.445586   31154 main.go:141] libmachine: found compatible host: buildroot
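
Provisioner detection comes down to parsing the KEY=VALUE pairs of /etc/os-release and matching ID. A short sketch using the output captured above:

// osrelease_sketch.go - parse /etc/os-release style KEY=VALUE output.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const osRelease = `NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"`

func main() {
	info := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		parts := strings.SplitN(line, "=", 2)
		info[parts[0]] = strings.Trim(parts[1], `"`)
	}
	if info["ID"] == "buildroot" {
		fmt.Println("found compatible host:", info["ID"])
	}
}
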
	I1001 19:22:13.445594   31154 main.go:141] libmachine: Provisioning with buildroot...
	I1001 19:22:13.445601   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetMachineName
	I1001 19:22:13.445821   31154 buildroot.go:166] provisioning hostname "ha-193737-m03"
	I1001 19:22:13.445847   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetMachineName
	I1001 19:22:13.446042   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:13.449433   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.449860   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.449897   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.450180   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:13.450368   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.450566   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.450713   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:13.450881   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:22:13.451039   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1001 19:22:13.451051   31154 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-193737-m03 && echo "ha-193737-m03" | sudo tee /etc/hostname
	I1001 19:22:13.572777   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-193737-m03
	
	I1001 19:22:13.572810   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:13.575494   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.575835   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.575859   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.576047   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:13.576235   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.576419   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.576571   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:13.576759   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:22:13.576956   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1001 19:22:13.576973   31154 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-193737-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-193737-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-193737-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 19:22:13.689983   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 19:22:13.690015   31154 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 19:22:13.690038   31154 buildroot.go:174] setting up certificates
	I1001 19:22:13.690050   31154 provision.go:84] configureAuth start
	I1001 19:22:13.690066   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetMachineName
	I1001 19:22:13.690369   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetIP
	I1001 19:22:13.693242   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.693664   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.693693   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.693840   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:13.696141   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.696495   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.696524   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.696638   31154 provision.go:143] copyHostCerts
	I1001 19:22:13.696676   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:22:13.696720   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 19:22:13.696731   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:22:13.696821   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 19:22:13.696919   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:22:13.696949   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 19:22:13.696960   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:22:13.697003   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 19:22:13.697067   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:22:13.697091   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 19:22:13.697100   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:22:13.697136   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 19:22:13.697206   31154 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.ha-193737-m03 san=[127.0.0.1 192.168.39.101 ha-193737-m03 localhost minikube]
	I1001 19:22:13.877573   31154 provision.go:177] copyRemoteCerts
	I1001 19:22:13.877625   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 19:22:13.877649   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:13.880678   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.880932   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.880970   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.881176   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:13.881406   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.881587   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:13.881804   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa Username:docker}
	I1001 19:22:13.962987   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 19:22:13.963068   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 19:22:13.986966   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 19:22:13.987070   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 19:22:14.013722   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 19:22:14.013794   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 19:22:14.037854   31154 provision.go:87] duration metric: took 347.788312ms to configureAuth
	I1001 19:22:14.037883   31154 buildroot.go:189] setting minikube options for container-runtime
	I1001 19:22:14.038135   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:22:14.038209   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:14.040944   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.041372   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.041401   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.041587   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:14.041771   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:14.041906   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:14.042003   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:14.042139   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:22:14.042328   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1001 19:22:14.042345   31154 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 19:22:14.262634   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 19:22:14.262673   31154 main.go:141] libmachine: Checking connection to Docker...
	I1001 19:22:14.262687   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetURL
	I1001 19:22:14.263998   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Using libvirt version 6000000
	I1001 19:22:14.266567   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.266926   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.266955   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.267154   31154 main.go:141] libmachine: Docker is up and running!
	I1001 19:22:14.267166   31154 main.go:141] libmachine: Reticulating splines...
	I1001 19:22:14.267173   31154 client.go:171] duration metric: took 24.593551771s to LocalClient.Create
	I1001 19:22:14.267196   31154 start.go:167] duration metric: took 24.593612564s to libmachine.API.Create "ha-193737"
	I1001 19:22:14.267205   31154 start.go:293] postStartSetup for "ha-193737-m03" (driver="kvm2")
	I1001 19:22:14.267214   31154 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 19:22:14.267240   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:22:14.267459   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 19:22:14.267484   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:14.269571   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.269977   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.270004   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.270121   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:14.270292   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:14.270427   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:14.270551   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa Username:docker}
	I1001 19:22:14.350988   31154 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 19:22:14.355823   31154 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 19:22:14.355848   31154 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 19:22:14.355915   31154 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 19:22:14.355986   31154 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 19:22:14.355994   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /etc/ssl/certs/184302.pem
	I1001 19:22:14.356070   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 19:22:14.366040   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:22:14.390055   31154 start.go:296] duration metric: took 122.835456ms for postStartSetup
	I1001 19:22:14.390108   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetConfigRaw
	I1001 19:22:14.390696   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetIP
	I1001 19:22:14.394065   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.394508   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.394536   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.394910   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:22:14.395150   31154 start.go:128] duration metric: took 24.741329773s to createHost
	I1001 19:22:14.395182   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:14.397581   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.397994   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.398017   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.398188   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:14.398403   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:14.398574   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:14.398727   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:14.398880   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:22:14.399094   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1001 19:22:14.399111   31154 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 19:22:14.505599   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727810534.482085733
	
	I1001 19:22:14.505628   31154 fix.go:216] guest clock: 1727810534.482085733
	I1001 19:22:14.505639   31154 fix.go:229] Guest: 2024-10-01 19:22:14.482085733 +0000 UTC Remote: 2024-10-01 19:22:14.395166889 +0000 UTC m=+146.623005707 (delta=86.918844ms)
	I1001 19:22:14.505658   31154 fix.go:200] guest clock delta is within tolerance: 86.918844ms
	I1001 19:22:14.505664   31154 start.go:83] releasing machines lock for "ha-193737-m03", held for 24.851963464s
	I1001 19:22:14.505684   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:22:14.505908   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetIP
	I1001 19:22:14.508696   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.509064   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.509086   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.511117   31154 out.go:177] * Found network options:
	I1001 19:22:14.512450   31154 out.go:177]   - NO_PROXY=192.168.39.14,192.168.39.27
	W1001 19:22:14.513603   31154 proxy.go:119] fail to check proxy env: Error ip not in block
	W1001 19:22:14.513632   31154 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 19:22:14.513653   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:22:14.514254   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:22:14.514460   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:22:14.514553   31154 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 19:22:14.514592   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	W1001 19:22:14.514627   31154 proxy.go:119] fail to check proxy env: Error ip not in block
	W1001 19:22:14.514652   31154 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 19:22:14.514726   31154 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 19:22:14.514748   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:14.517511   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.517716   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.517872   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.517897   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.518069   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:14.518071   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.518151   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.518298   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:14.518302   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:14.518474   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:14.518512   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:14.518613   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:14.518617   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa Username:docker}
	I1001 19:22:14.518740   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa Username:docker}
	I1001 19:22:14.749140   31154 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 19:22:14.755011   31154 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 19:22:14.755083   31154 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 19:22:14.772351   31154 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 19:22:14.772388   31154 start.go:495] detecting cgroup driver to use...
	I1001 19:22:14.772457   31154 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 19:22:14.789303   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 19:22:14.804840   31154 docker.go:217] disabling cri-docker service (if available) ...
	I1001 19:22:14.804906   31154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 19:22:14.819518   31154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 19:22:14.834095   31154 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 19:22:14.944783   31154 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 19:22:15.079717   31154 docker.go:233] disabling docker service ...
	I1001 19:22:15.079790   31154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 19:22:15.095162   31154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 19:22:15.107998   31154 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 19:22:15.243729   31154 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 19:22:15.377225   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 19:22:15.391343   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 19:22:15.411068   31154 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 19:22:15.411143   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:22:15.423227   31154 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 19:22:15.423294   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:22:15.434691   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:22:15.446242   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:22:15.457352   31154 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 19:22:15.469147   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:22:15.479924   31154 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:22:15.497221   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:22:15.507678   31154 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 19:22:15.517482   31154 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 19:22:15.517554   31154 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 19:22:15.532214   31154 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 19:22:15.541788   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:22:15.665094   31154 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 19:22:15.757492   31154 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 19:22:15.757569   31154 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 19:22:15.762004   31154 start.go:563] Will wait 60s for crictl version
	I1001 19:22:15.762063   31154 ssh_runner.go:195] Run: which crictl
	I1001 19:22:15.766039   31154 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 19:22:15.802516   31154 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 19:22:15.802600   31154 ssh_runner.go:195] Run: crio --version
	I1001 19:22:15.831926   31154 ssh_runner.go:195] Run: crio --version
	I1001 19:22:15.862187   31154 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 19:22:15.863552   31154 out.go:177]   - env NO_PROXY=192.168.39.14
	I1001 19:22:15.864903   31154 out.go:177]   - env NO_PROXY=192.168.39.14,192.168.39.27
	I1001 19:22:15.866357   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetIP
	I1001 19:22:15.868791   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:15.869113   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:15.869142   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:15.869293   31154 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 19:22:15.873237   31154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 19:22:15.885293   31154 mustload.go:65] Loading cluster: ha-193737
	I1001 19:22:15.885514   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:22:15.885795   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:22:15.885838   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:22:15.901055   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45767
	I1001 19:22:15.901633   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:22:15.902627   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:22:15.902658   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:22:15.903034   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:22:15.903198   31154 main.go:141] libmachine: (ha-193737) Calling .GetState
	I1001 19:22:15.905017   31154 host.go:66] Checking if "ha-193737" exists ...
	I1001 19:22:15.905429   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:22:15.905488   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:22:15.921741   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33127
	I1001 19:22:15.922203   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:22:15.923200   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:22:15.923220   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:22:15.923541   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:22:15.923744   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:22:15.923907   31154 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737 for IP: 192.168.39.101
	I1001 19:22:15.923919   31154 certs.go:194] generating shared ca certs ...
	I1001 19:22:15.923941   31154 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:22:15.924081   31154 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 19:22:15.924118   31154 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 19:22:15.924126   31154 certs.go:256] generating profile certs ...
	I1001 19:22:15.924217   31154 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key
	I1001 19:22:15.924242   31154 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.09da423f
	I1001 19:22:15.924256   31154 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.09da423f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.14 192.168.39.27 192.168.39.101 192.168.39.254]
	I1001 19:22:16.102464   31154 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.09da423f ...
	I1001 19:22:16.102493   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.09da423f: {Name:mk41b913f57e7f10c713b2e18136c742f7b09ec4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:22:16.102655   31154 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.09da423f ...
	I1001 19:22:16.102668   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.09da423f: {Name:mkaf44cea34e6bfbac4ea8c8d70ebec43d2a6d80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:22:16.102739   31154 certs.go:381] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.09da423f -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt
	I1001 19:22:16.102870   31154 certs.go:385] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.09da423f -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key
	I1001 19:22:16.102988   31154 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key
	I1001 19:22:16.103003   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 19:22:16.103016   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 19:22:16.103030   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 19:22:16.103042   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 19:22:16.103054   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 19:22:16.103067   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 19:22:16.103081   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 19:22:16.120441   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 19:22:16.120535   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem (1338 bytes)
	W1001 19:22:16.120569   31154 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430_empty.pem, impossibly tiny 0 bytes
	I1001 19:22:16.120579   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 19:22:16.120602   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 19:22:16.120624   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 19:22:16.120682   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 19:22:16.120730   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:22:16.120759   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /usr/share/ca-certificates/184302.pem
	I1001 19:22:16.120772   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:22:16.120784   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem -> /usr/share/ca-certificates/18430.pem
	I1001 19:22:16.120814   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:22:16.123512   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:22:16.123983   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:22:16.124012   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:22:16.124198   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:22:16.124425   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:22:16.124611   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:22:16.124747   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:22:16.196684   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1001 19:22:16.201293   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1001 19:22:16.211163   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1001 19:22:16.215061   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1001 19:22:16.225018   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1001 19:22:16.228909   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1001 19:22:16.239430   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1001 19:22:16.243222   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1001 19:22:16.253163   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1001 19:22:16.256929   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1001 19:22:16.266378   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1001 19:22:16.270062   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1001 19:22:16.278964   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 19:22:16.303288   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 19:22:16.326243   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 19:22:16.347460   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 19:22:16.372037   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1001 19:22:16.396287   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 19:22:16.420724   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 19:22:16.445707   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 19:22:16.468539   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /usr/share/ca-certificates/184302.pem (1708 bytes)
	I1001 19:22:16.492971   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 19:22:16.517838   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem --> /usr/share/ca-certificates/18430.pem (1338 bytes)
	I1001 19:22:16.541960   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1001 19:22:16.557831   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1001 19:22:16.573594   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1001 19:22:16.590168   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1001 19:22:16.607168   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1001 19:22:16.623957   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1001 19:22:16.640438   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1001 19:22:16.655967   31154 ssh_runner.go:195] Run: openssl version
	I1001 19:22:16.661524   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/184302.pem && ln -fs /usr/share/ca-certificates/184302.pem /etc/ssl/certs/184302.pem"
	I1001 19:22:16.672376   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/184302.pem
	I1001 19:22:16.676864   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 19:22:16.676922   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/184302.pem
	I1001 19:22:16.682647   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/184302.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 19:22:16.693083   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 19:22:16.703938   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:22:16.708263   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:22:16.708320   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:22:16.714520   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 19:22:16.725249   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18430.pem && ln -fs /usr/share/ca-certificates/18430.pem /etc/ssl/certs/18430.pem"
	I1001 19:22:16.736315   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18430.pem
	I1001 19:22:16.741061   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 19:22:16.741120   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18430.pem
	I1001 19:22:16.746697   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18430.pem /etc/ssl/certs/51391683.0"
	I1001 19:22:16.757551   31154 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 19:22:16.761481   31154 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 19:22:16.761539   31154 kubeadm.go:934] updating node {m03 192.168.39.101 8443 v1.31.1 crio true true} ...
	I1001 19:22:16.761636   31154 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-193737-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 19:22:16.761666   31154 kube-vip.go:115] generating kube-vip config ...
	I1001 19:22:16.761704   31154 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1001 19:22:16.778682   31154 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1001 19:22:16.778755   31154 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1001 19:22:16.778825   31154 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 19:22:16.788174   31154 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1001 19:22:16.788258   31154 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1001 19:22:16.797330   31154 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1001 19:22:16.797360   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 19:22:16.797405   31154 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1001 19:22:16.797420   31154 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 19:22:16.797425   31154 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1001 19:22:16.797452   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 19:22:16.797455   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 19:22:16.797515   31154 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 19:22:16.806983   31154 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1001 19:22:16.807016   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1001 19:22:16.807033   31154 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1001 19:22:16.807064   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1001 19:22:16.822346   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 19:22:16.822450   31154 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 19:22:16.908222   31154 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1001 19:22:16.908266   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1001 19:22:17.718151   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1001 19:22:17.728679   31154 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1001 19:22:17.753493   31154 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 19:22:17.773315   31154 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1001 19:22:17.791404   31154 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1001 19:22:17.795599   31154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 19:22:17.808083   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:22:17.928195   31154 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 19:22:17.944678   31154 host.go:66] Checking if "ha-193737" exists ...
	I1001 19:22:17.945052   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:22:17.945093   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:22:17.962020   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45629
	I1001 19:22:17.962474   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:22:17.962912   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:22:17.962940   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:22:17.963311   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:22:17.963520   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:22:17.963697   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:22:17.963697   31154 start.go:317] joinCluster: &{Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:22:17.963861   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1001 19:22:17.963886   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:22:17.967232   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:22:17.967827   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:22:17.967856   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:22:17.968135   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:22:17.968336   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:22:17.968495   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:22:17.968659   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:22:18.133596   31154 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:22:18.133651   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token z7cdmg.hjk7kyt30ndw2tea --discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-193737-m03 --control-plane --apiserver-advertise-address=192.168.39.101 --apiserver-bind-port=8443"
	I1001 19:22:41.859086   31154 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token z7cdmg.hjk7kyt30ndw2tea --discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-193737-m03 --control-plane --apiserver-advertise-address=192.168.39.101 --apiserver-bind-port=8443": (23.725407283s)
	I1001 19:22:41.859128   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1001 19:22:42.384071   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-193737-m03 minikube.k8s.io/updated_at=2024_10_01T19_22_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=ha-193737 minikube.k8s.io/primary=false
	I1001 19:22:42.510669   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-193737-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1001 19:22:42.641492   31154 start.go:319] duration metric: took 24.67779185s to joinCluster
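
The two kubectl invocations above, labelling the freshly joined member and removing its control-plane NoSchedule taint, can also be expressed directly against the API. Below is a minimal client-go sketch rather than the code minikube actually runs; labelAndUntaint is an illustrative name, the label keys and values are the ones visible in the log, and the clientset is assumed to be built from the profile's kubeconfig as in the sketch further down.

    // Illustrative only: the label/taint step done via client-go instead of kubectl.
    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // labelAndUntaint marks a freshly joined control-plane node the way the
    // kubectl commands above do: add minikube's bookkeeping labels and drop the
    // node-role.kubernetes.io/control-plane:NoSchedule taint.
    func labelAndUntaint(ctx context.Context, cs *kubernetes.Clientset, nodeName string) error {
        node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if node.Labels == nil {
            node.Labels = map[string]string{}
        }
        node.Labels["minikube.k8s.io/name"] = "ha-193737"
        node.Labels["minikube.k8s.io/primary"] = "false"

        // Keep every taint except the control-plane NoSchedule one.
        kept := node.Spec.Taints[:0]
        for _, t := range node.Spec.Taints {
            if t.Key == "node-role.kubernetes.io/control-plane" && t.Effect == corev1.TaintEffectNoSchedule {
                continue
            }
            kept = append(kept, t)
        }
        node.Spec.Taints = kept

        // A production version would retry on update conflicts.
        _, err = cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
        return err
    }
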
	I1001 19:22:42.641581   31154 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:22:42.641937   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:22:42.642770   31154 out.go:177] * Verifying Kubernetes components...
	I1001 19:22:42.643798   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:22:42.883720   31154 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 19:22:42.899372   31154 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 19:22:42.899626   31154 kapi.go:59] client config for ha-193737: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.crt", KeyFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key", CAFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1001 19:22:42.899683   31154 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.14:8443
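
The "Overriding stale ClientConfig host" step boils down to loading the profile's kubeconfig and pointing the resulting rest.Config at one concrete API server instead of the HA VIP. A minimal sketch of that, using the kubeconfig path and endpoint shown in the log; error handling is reduced to panics for brevity and the node listing at the end is only a smoke test.

    // Illustrative only: build a client from the on-disk kubeconfig and override
    // the stale host, roughly what the kapi/kubeadm helpers above are doing.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19736-11198/kubeconfig")
        if err != nil {
            panic(err)
        }
        // Talk to one concrete control-plane endpoint rather than the stale VIP.
        cfg.Host = "https://192.168.39.14:8443"

        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("nodes visible through ha-193737:", len(nodes.Items))
    }
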
	I1001 19:22:42.899959   31154 node_ready.go:35] waiting up to 6m0s for node "ha-193737-m03" to be "Ready" ...
	I1001 19:22:42.900040   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:42.900052   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:42.900063   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:42.900071   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:42.904647   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:22:43.401126   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:43.401152   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:43.401163   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:43.401168   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:43.405027   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:43.900824   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:43.900848   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:43.900859   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:43.900868   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:43.904531   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:44.400251   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:44.400272   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:44.400281   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:44.400285   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:44.403517   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:44.901001   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:44.901028   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:44.901036   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:44.901041   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:44.905012   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:44.905575   31154 node_ready.go:53] node "ha-193737-m03" has status "Ready":"False"
	I1001 19:22:45.400898   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:45.400924   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:45.400935   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:45.400942   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:45.405202   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:22:45.900749   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:45.900772   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:45.900781   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:45.900785   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:45.904505   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:46.400832   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:46.400855   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:46.400865   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:46.400871   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:46.404455   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:46.900834   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:46.900926   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:46.900945   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:46.900955   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:46.907848   31154 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1001 19:22:46.909060   31154 node_ready.go:53] node "ha-193737-m03" has status "Ready":"False"
	I1001 19:22:47.400619   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:47.400639   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:47.400647   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:47.400651   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:47.404519   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:47.900808   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:47.900835   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:47.900846   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:47.900851   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:48.028121   31154 round_trippers.go:574] Response Status: 200 OK in 127 milliseconds
	I1001 19:22:48.400839   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:48.400859   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:48.400866   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:48.400870   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:48.404198   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:48.900508   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:48.900533   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:48.900544   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:48.900551   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:48.904379   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:49.400836   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:49.400857   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:49.400866   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:49.400870   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:49.403736   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:22:49.404256   31154 node_ready.go:53] node "ha-193737-m03" has status "Ready":"False"
	I1001 19:22:49.901034   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:49.901058   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:49.901068   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:49.901073   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:49.905378   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:22:50.400178   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:50.400198   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:50.400206   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:50.400214   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:50.403269   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:50.901215   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:50.901242   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:50.901251   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:50.901256   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:50.905409   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:22:51.400867   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:51.400890   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:51.400899   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:51.400908   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:51.404516   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:51.404962   31154 node_ready.go:53] node "ha-193737-m03" has status "Ready":"False"
	I1001 19:22:51.900265   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:51.900308   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:51.900315   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:51.900319   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:51.903634   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:52.401178   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:52.401200   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:52.401206   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:52.401211   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:52.404511   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:52.900412   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:52.900432   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:52.900441   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:52.900446   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:52.903570   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:53.400572   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:53.400602   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:53.400614   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:53.400622   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:53.403821   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:53.900178   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:53.900201   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:53.900210   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:53.900214   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:53.903933   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:53.904621   31154 node_ready.go:53] node "ha-193737-m03" has status "Ready":"False"
	I1001 19:22:54.401040   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:54.401066   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:54.401078   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:54.401085   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:54.404732   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:54.901129   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:54.901154   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:54.901163   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:54.901166   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:54.904547   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:55.400669   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:55.400692   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:55.400700   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:55.400703   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:55.404556   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:55.900944   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:55.900966   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:55.900974   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:55.900977   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:55.904209   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:55.904851   31154 node_ready.go:53] node "ha-193737-m03" has status "Ready":"False"
	I1001 19:22:56.400513   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:56.400537   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:56.400548   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:56.400554   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:56.403671   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:56.900541   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:56.900564   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:56.900575   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:56.900582   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:56.903726   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:57.400178   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:57.400200   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:57.400209   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:57.400216   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:57.403658   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:57.901131   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:57.901154   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:57.901163   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:57.901169   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:57.904387   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:58.401066   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:58.401087   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:58.401095   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:58.401098   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:58.404875   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:58.405329   31154 node_ready.go:53] node "ha-193737-m03" has status "Ready":"False"
	I1001 19:22:58.900140   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:58.900160   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:58.900168   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:58.900172   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:58.903081   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:22:59.401118   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:59.401143   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.401153   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.401156   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.404480   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:59.405079   31154 node_ready.go:49] node "ha-193737-m03" has status "Ready":"True"
	I1001 19:22:59.405100   31154 node_ready.go:38] duration metric: took 16.505122802s for node "ha-193737-m03" to be "Ready" ...
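
The Ready wait above is a plain polling loop: GET the Node object roughly every 500ms and check its NodeReady condition until it reports True or the 6m0s budget runs out. A rough client-go equivalent, not minikube's own implementation; waitNodeReady is an illustrative name and the interval is inferred from the log's cadence.

    // Illustrative only: poll a node until its Ready condition becomes True.
    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // tolerate transient API errors and keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
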
	I1001 19:22:59.405110   31154 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 19:22:59.405190   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:22:59.405207   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.405217   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.405227   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.412572   31154 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1001 19:22:59.420220   31154 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hd5hv" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.420321   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hd5hv
	I1001 19:22:59.420334   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.420345   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.420353   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.423179   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:22:59.423949   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:22:59.423964   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.423970   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.423975   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.426304   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:22:59.426762   31154 pod_ready.go:93] pod "coredns-7c65d6cfc9-hd5hv" in "kube-system" namespace has status "Ready":"True"
	I1001 19:22:59.426780   31154 pod_ready.go:82] duration metric: took 6.530664ms for pod "coredns-7c65d6cfc9-hd5hv" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.426796   31154 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v2wsx" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.426857   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-v2wsx
	I1001 19:22:59.426866   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.426876   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.426887   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.429141   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:22:59.429823   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:22:59.429840   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.429848   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.429852   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.431860   31154 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1001 19:22:59.432333   31154 pod_ready.go:93] pod "coredns-7c65d6cfc9-v2wsx" in "kube-system" namespace has status "Ready":"True"
	I1001 19:22:59.432348   31154 pod_ready.go:82] duration metric: took 5.544704ms for pod "coredns-7c65d6cfc9-v2wsx" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.432374   31154 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.432437   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-ha-193737
	I1001 19:22:59.432448   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.432456   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.432459   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.434479   31154 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1001 19:22:59.435042   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:22:59.435057   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.435063   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.435067   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.437217   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:22:59.437787   31154 pod_ready.go:93] pod "etcd-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:22:59.437803   31154 pod_ready.go:82] duration metric: took 5.420394ms for pod "etcd-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.437813   31154 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.437864   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-ha-193737-m02
	I1001 19:22:59.437874   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.437883   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.437892   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.440631   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:22:59.441277   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:22:59.441295   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.441316   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.441325   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.448195   31154 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1001 19:22:59.448905   31154 pod_ready.go:93] pod "etcd-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:22:59.448925   31154 pod_ready.go:82] duration metric: took 11.104591ms for pod "etcd-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.448938   31154 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.601259   31154 request.go:632] Waited for 152.231969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-ha-193737-m03
	I1001 19:22:59.601316   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-ha-193737-m03
	I1001 19:22:59.601321   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.601329   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.601333   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.604878   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:59.801921   31154 request.go:632] Waited for 196.382761ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:59.802008   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:59.802021   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.802031   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.802037   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.805203   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:59.806083   31154 pod_ready.go:93] pod "etcd-ha-193737-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 19:22:59.806103   31154 pod_ready.go:82] duration metric: took 357.156614ms for pod "etcd-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
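
The "Waited for ... due to client-side throttling" lines that start appearing here come from client-go's own rate limiter (by default roughly 5 requests per second with a burst of 10), not from server-side API Priority and Fairness, which is exactly what the message says. If that delay mattered, the limits could be raised on the rest.Config before the clientset is built; the values below are arbitrary examples, not minikube settings.

    // Illustrative only: loosen client-go's client-side rate limiter.
    package sketch

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func highThroughputClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // default is about 5 requests/second
        cfg.Burst = 100 // default burst is about 10
        return kubernetes.NewForConfig(cfg)
    }
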
	I1001 19:22:59.806134   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:00.001202   31154 request.go:632] Waited for 194.974996ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737
	I1001 19:23:00.001255   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737
	I1001 19:23:00.001260   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:00.001267   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:00.001271   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:00.005307   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:23:00.201989   31154 request.go:632] Waited for 195.321685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:00.202114   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:00.202132   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:00.202146   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:00.202158   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:00.205788   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:00.206508   31154 pod_ready.go:93] pod "kube-apiserver-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:00.206529   31154 pod_ready.go:82] duration metric: took 400.381151ms for pod "kube-apiserver-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:00.206541   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:00.401602   31154 request.go:632] Waited for 194.993098ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737-m02
	I1001 19:23:00.401663   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737-m02
	I1001 19:23:00.401668   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:00.401676   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:00.401680   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:00.405450   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:00.601599   31154 request.go:632] Waited for 195.316962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:00.601692   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:00.601700   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:00.601707   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:00.601711   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:00.605188   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:00.605660   31154 pod_ready.go:93] pod "kube-apiserver-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:00.605679   31154 pod_ready.go:82] duration metric: took 399.130829ms for pod "kube-apiserver-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:00.605688   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:00.801836   31154 request.go:632] Waited for 196.081559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737-m03
	I1001 19:23:00.801903   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737-m03
	I1001 19:23:00.801908   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:00.801926   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:00.801931   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:00.805500   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:01.001996   31154 request.go:632] Waited for 195.706291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:01.002060   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:01.002068   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:01.002082   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:01.002090   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:01.005674   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:01.006438   31154 pod_ready.go:93] pod "kube-apiserver-ha-193737-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:01.006466   31154 pod_ready.go:82] duration metric: took 400.769669ms for pod "kube-apiserver-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:01.006480   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:01.201564   31154 request.go:632] Waited for 195.007953ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737
	I1001 19:23:01.201618   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737
	I1001 19:23:01.201623   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:01.201630   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:01.201634   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:01.204998   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:01.402159   31154 request.go:632] Waited for 196.410696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:01.402225   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:01.402232   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:01.402243   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:01.402250   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:01.405639   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:01.406259   31154 pod_ready.go:93] pod "kube-controller-manager-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:01.406284   31154 pod_ready.go:82] duration metric: took 399.796485ms for pod "kube-controller-manager-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:01.406298   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:01.601556   31154 request.go:632] Waited for 195.171182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737-m02
	I1001 19:23:01.601629   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737-m02
	I1001 19:23:01.601638   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:01.601646   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:01.601655   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:01.605271   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:01.801581   31154 request.go:632] Waited for 195.404456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:01.801644   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:01.801651   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:01.801662   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:01.801669   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:01.805042   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:01.805673   31154 pod_ready.go:93] pod "kube-controller-manager-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:01.805694   31154 pod_ready.go:82] duration metric: took 399.387622ms for pod "kube-controller-manager-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:01.805707   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:02.001904   31154 request.go:632] Waited for 195.994245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737-m03
	I1001 19:23:02.002040   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737-m03
	I1001 19:23:02.002064   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:02.002075   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:02.002080   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:02.005612   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:02.201553   31154 request.go:632] Waited for 195.185972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:02.201606   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:02.201612   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:02.201628   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:02.201645   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:02.205018   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:02.205533   31154 pod_ready.go:93] pod "kube-controller-manager-ha-193737-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:02.205552   31154 pod_ready.go:82] duration metric: took 399.838551ms for pod "kube-controller-manager-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:02.205563   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4294m" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:02.401983   31154 request.go:632] Waited for 196.357491ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4294m
	I1001 19:23:02.402038   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4294m
	I1001 19:23:02.402043   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:02.402049   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:02.402054   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:02.405225   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:02.601208   31154 request.go:632] Waited for 195.289332ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:02.601293   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:02.601304   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:02.601316   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:02.601328   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:02.604768   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:02.605212   31154 pod_ready.go:93] pod "kube-proxy-4294m" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:02.605230   31154 pod_ready.go:82] duration metric: took 399.66052ms for pod "kube-proxy-4294m" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:02.605242   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9pm4t" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:02.801359   31154 request.go:632] Waited for 196.035084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9pm4t
	I1001 19:23:02.801440   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9pm4t
	I1001 19:23:02.801448   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:02.801462   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:02.801473   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:02.804772   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:03.001444   31154 request.go:632] Waited for 196.042411ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:03.001517   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:03.001522   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:03.001536   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:03.001543   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:03.005199   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:03.005738   31154 pod_ready.go:93] pod "kube-proxy-9pm4t" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:03.005763   31154 pod_ready.go:82] duration metric: took 400.510951ms for pod "kube-proxy-9pm4t" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:03.005773   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zpsll" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:03.201543   31154 request.go:632] Waited for 195.704518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zpsll
	I1001 19:23:03.201618   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zpsll
	I1001 19:23:03.201627   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:03.201634   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:03.201639   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:03.204535   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:23:03.401528   31154 request.go:632] Waited for 196.292025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:03.401585   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:03.401590   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:03.401597   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:03.401602   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:03.405338   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:03.406008   31154 pod_ready.go:93] pod "kube-proxy-zpsll" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:03.406025   31154 pod_ready.go:82] duration metric: took 400.246215ms for pod "kube-proxy-zpsll" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:03.406035   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:03.601668   31154 request.go:632] Waited for 195.548834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737
	I1001 19:23:03.601752   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737
	I1001 19:23:03.601760   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:03.601772   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:03.601779   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:03.605345   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:03.801308   31154 request.go:632] Waited for 195.294104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:03.801403   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:03.801417   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:03.801427   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:03.801434   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:03.804468   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:03.805276   31154 pod_ready.go:93] pod "kube-scheduler-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:03.805293   31154 pod_ready.go:82] duration metric: took 399.251767ms for pod "kube-scheduler-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:03.805303   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:04.001445   31154 request.go:632] Waited for 196.067713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737-m02
	I1001 19:23:04.001522   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737-m02
	I1001 19:23:04.001531   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:04.001541   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:04.001548   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:04.004705   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:04.201792   31154 request.go:632] Waited for 196.362451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:04.201872   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:04.201879   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:04.201889   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:04.201897   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:04.205376   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:04.206212   31154 pod_ready.go:93] pod "kube-scheduler-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:04.206235   31154 pod_ready.go:82] duration metric: took 400.923668ms for pod "kube-scheduler-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:04.206250   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:04.401166   31154 request.go:632] Waited for 194.837724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737-m03
	I1001 19:23:04.401244   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737-m03
	I1001 19:23:04.401252   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:04.401266   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:04.401273   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:04.404292   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:23:04.601244   31154 request.go:632] Waited for 196.299344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:04.601300   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:04.601306   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:04.601313   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:04.601317   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:04.604470   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:04.605038   31154 pod_ready.go:93] pod "kube-scheduler-ha-193737-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:04.605055   31154 pod_ready.go:82] duration metric: took 398.796981ms for pod "kube-scheduler-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:04.605065   31154 pod_ready.go:39] duration metric: took 5.199943212s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
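
Each pod_ready check above is the same test that kubectl wait --for=condition=Ready performs: fetch the pod and inspect its PodReady condition. A small illustrative helper; podIsReady is not a minikube function.

    // Illustrative only: report whether a pod's Ready condition is True.
    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func podIsReady(ctx context.Context, cs *kubernetes.Clientset, namespace, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
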
	I1001 19:23:04.605079   31154 api_server.go:52] waiting for apiserver process to appear ...
	I1001 19:23:04.605144   31154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 19:23:04.623271   31154 api_server.go:72] duration metric: took 21.981652881s to wait for apiserver process to appear ...
	I1001 19:23:04.623293   31154 api_server.go:88] waiting for apiserver healthz status ...
	I1001 19:23:04.623314   31154 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I1001 19:23:04.631212   31154 api_server.go:279] https://192.168.39.14:8443/healthz returned 200:
	ok
	I1001 19:23:04.631285   31154 round_trippers.go:463] GET https://192.168.39.14:8443/version
	I1001 19:23:04.631295   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:04.631303   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:04.631310   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:04.632155   31154 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1001 19:23:04.632226   31154 api_server.go:141] control plane version: v1.31.1
	I1001 19:23:04.632243   31154 api_server.go:131] duration metric: took 8.942184ms to wait for apiserver health ...
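
The healthz probe and the /version call that follows it can both be issued through the clientset's REST machinery instead of a hand-rolled HTTPS client, which avoids wiring up the client certificates twice. A sketch under that assumption; checkAPIServer is an illustrative name and the clientset is the one from the kubeconfig sketch earlier.

    // Illustrative only: hit /healthz and /version through the discovery client.
    package sketch

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    func checkAPIServer(ctx context.Context, cs *kubernetes.Clientset) error {
        // GET /healthz; a healthy apiserver answers 200 with the literal body "ok".
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return err
        }
        fmt.Printf("healthz: %s\n", body)

        // GET /version, the same request the log issues right after healthz.
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            return err
        }
        fmt.Println("control plane version:", v.GitVersion)
        return nil
    }
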
	I1001 19:23:04.632254   31154 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 19:23:04.801981   31154 request.go:632] Waited for 169.64915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:23:04.802068   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:23:04.802079   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:04.802090   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:04.802102   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:04.809502   31154 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1001 19:23:04.815901   31154 system_pods.go:59] 24 kube-system pods found
	I1001 19:23:04.815930   31154 system_pods.go:61] "coredns-7c65d6cfc9-hd5hv" [31f0afff-5571-46d6-888f-8982c71ba191] Running
	I1001 19:23:04.815935   31154 system_pods.go:61] "coredns-7c65d6cfc9-v2wsx" [8e3dd318-5017-4ada-bf2f-61b640ee2030] Running
	I1001 19:23:04.815939   31154 system_pods.go:61] "etcd-ha-193737" [99c3674d-50d6-4160-89dd-afb3c2e71039] Running
	I1001 19:23:04.815943   31154 system_pods.go:61] "etcd-ha-193737-m02" [a541b9c8-c10e-45b4-ac9c-d16e8bf659b0] Running
	I1001 19:23:04.815946   31154 system_pods.go:61] "etcd-ha-193737-m03" [de61043b-ff4c-4d28-ab01-d63abf25ef30] Running
	I1001 19:23:04.815949   31154 system_pods.go:61] "kindnet-bqht8" [3cef1863-ae14-4ab4-bc4f-5545e058cc9c] Running
	I1001 19:23:04.815953   31154 system_pods.go:61] "kindnet-drdlr" [13177890-a0eb-47ff-8fe2-585992810e47] Running
	I1001 19:23:04.815955   31154 system_pods.go:61] "kindnet-wnr6g" [89e11419-0c5c-486e-bdbf-eaf6fab1e62c] Running
	I1001 19:23:04.815958   31154 system_pods.go:61] "kube-apiserver-ha-193737" [432bf6a6-4607-4f55-a026-d910075d3145] Running
	I1001 19:23:04.815961   31154 system_pods.go:61] "kube-apiserver-ha-193737-m02" [379a24af-2148-4870-9f4b-25ad48943445] Running
	I1001 19:23:04.815964   31154 system_pods.go:61] "kube-apiserver-ha-193737-m03" [fbf7fbec-142d-4402-9bcc-c3e25e11ac2e] Running
	I1001 19:23:04.815968   31154 system_pods.go:61] "kube-controller-manager-ha-193737" [95fb44db-8cb3-4db9-8a84-ab82e15760de] Running
	I1001 19:23:04.815971   31154 system_pods.go:61] "kube-controller-manager-ha-193737-m02" [5b0821e0-673a-440c-8ca1-fefbfe913b94] Running
	I1001 19:23:04.815974   31154 system_pods.go:61] "kube-controller-manager-ha-193737-m03" [fd854d14-6abb-42eb-b560-e816e86c6767] Running
	I1001 19:23:04.815981   31154 system_pods.go:61] "kube-proxy-4294m" [f454ca0f-d662-4dfd-ab77-ebccaf3a6b12] Running
	I1001 19:23:04.815987   31154 system_pods.go:61] "kube-proxy-9pm4t" [5dba191b-ba4a-4a22-80df-65afd1dcbfb5] Running
	I1001 19:23:04.815989   31154 system_pods.go:61] "kube-proxy-zpsll" [c18fec3c-2880-4860-b220-a44d5e523bed] Running
	I1001 19:23:04.815998   31154 system_pods.go:61] "kube-scheduler-ha-193737" [46ca2b37-6145-4111-9088-bcf51307be3a] Running
	I1001 19:23:04.816002   31154 system_pods.go:61] "kube-scheduler-ha-193737-m02" [72be6cd5-d226-46d6-b675-99014e544dfb] Running
	I1001 19:23:04.816005   31154 system_pods.go:61] "kube-scheduler-ha-193737-m03" [129167e7-febe-4de3-a35f-3f0e668c7a77] Running
	I1001 19:23:04.816008   31154 system_pods.go:61] "kube-vip-ha-193737" [cbe8e6a4-08f3-4db3-af4d-810a5592597c] Running
	I1001 19:23:04.816014   31154 system_pods.go:61] "kube-vip-ha-193737-m02" [da7dfa59-1b27-45ce-9e61-ad8da40ec548] Running
	I1001 19:23:04.816017   31154 system_pods.go:61] "kube-vip-ha-193737-m03" [7a9bbd2f-8b9a-4104-baf4-11efdd662028] Running
	I1001 19:23:04.816022   31154 system_pods.go:61] "storage-provisioner" [d5b587a6-418b-47e5-9bf7-3fb6fa5e3372] Running
	I1001 19:23:04.816027   31154 system_pods.go:74] duration metric: took 183.765578ms to wait for pod list to return data ...
	I1001 19:23:04.816036   31154 default_sa.go:34] waiting for default service account to be created ...
	I1001 19:23:05.001464   31154 request.go:632] Waited for 185.352635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/default/serviceaccounts
	I1001 19:23:05.001522   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/default/serviceaccounts
	I1001 19:23:05.001527   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:05.001534   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:05.001538   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:05.005437   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:05.005559   31154 default_sa.go:45] found service account: "default"
	I1001 19:23:05.005576   31154 default_sa.go:55] duration metric: took 189.530453ms for default service account to be created ...
	I1001 19:23:05.005589   31154 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 19:23:05.201939   31154 request.go:632] Waited for 196.276664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:23:05.201999   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:23:05.202009   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:05.202018   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:05.202026   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:05.208844   31154 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1001 19:23:05.215522   31154 system_pods.go:86] 24 kube-system pods found
	I1001 19:23:05.215551   31154 system_pods.go:89] "coredns-7c65d6cfc9-hd5hv" [31f0afff-5571-46d6-888f-8982c71ba191] Running
	I1001 19:23:05.215559   31154 system_pods.go:89] "coredns-7c65d6cfc9-v2wsx" [8e3dd318-5017-4ada-bf2f-61b640ee2030] Running
	I1001 19:23:05.215563   31154 system_pods.go:89] "etcd-ha-193737" [99c3674d-50d6-4160-89dd-afb3c2e71039] Running
	I1001 19:23:05.215567   31154 system_pods.go:89] "etcd-ha-193737-m02" [a541b9c8-c10e-45b4-ac9c-d16e8bf659b0] Running
	I1001 19:23:05.215570   31154 system_pods.go:89] "etcd-ha-193737-m03" [de61043b-ff4c-4d28-ab01-d63abf25ef30] Running
	I1001 19:23:05.215574   31154 system_pods.go:89] "kindnet-bqht8" [3cef1863-ae14-4ab4-bc4f-5545e058cc9c] Running
	I1001 19:23:05.215578   31154 system_pods.go:89] "kindnet-drdlr" [13177890-a0eb-47ff-8fe2-585992810e47] Running
	I1001 19:23:05.215581   31154 system_pods.go:89] "kindnet-wnr6g" [89e11419-0c5c-486e-bdbf-eaf6fab1e62c] Running
	I1001 19:23:05.215584   31154 system_pods.go:89] "kube-apiserver-ha-193737" [432bf6a6-4607-4f55-a026-d910075d3145] Running
	I1001 19:23:05.215588   31154 system_pods.go:89] "kube-apiserver-ha-193737-m02" [379a24af-2148-4870-9f4b-25ad48943445] Running
	I1001 19:23:05.215591   31154 system_pods.go:89] "kube-apiserver-ha-193737-m03" [fbf7fbec-142d-4402-9bcc-c3e25e11ac2e] Running
	I1001 19:23:05.215595   31154 system_pods.go:89] "kube-controller-manager-ha-193737" [95fb44db-8cb3-4db9-8a84-ab82e15760de] Running
	I1001 19:23:05.215598   31154 system_pods.go:89] "kube-controller-manager-ha-193737-m02" [5b0821e0-673a-440c-8ca1-fefbfe913b94] Running
	I1001 19:23:05.215601   31154 system_pods.go:89] "kube-controller-manager-ha-193737-m03" [fd854d14-6abb-42eb-b560-e816e86c6767] Running
	I1001 19:23:05.215603   31154 system_pods.go:89] "kube-proxy-4294m" [f454ca0f-d662-4dfd-ab77-ebccaf3a6b12] Running
	I1001 19:23:05.215606   31154 system_pods.go:89] "kube-proxy-9pm4t" [5dba191b-ba4a-4a22-80df-65afd1dcbfb5] Running
	I1001 19:23:05.215609   31154 system_pods.go:89] "kube-proxy-zpsll" [c18fec3c-2880-4860-b220-a44d5e523bed] Running
	I1001 19:23:05.215613   31154 system_pods.go:89] "kube-scheduler-ha-193737" [46ca2b37-6145-4111-9088-bcf51307be3a] Running
	I1001 19:23:05.215616   31154 system_pods.go:89] "kube-scheduler-ha-193737-m02" [72be6cd5-d226-46d6-b675-99014e544dfb] Running
	I1001 19:23:05.215621   31154 system_pods.go:89] "kube-scheduler-ha-193737-m03" [129167e7-febe-4de3-a35f-3f0e668c7a77] Running
	I1001 19:23:05.215626   31154 system_pods.go:89] "kube-vip-ha-193737" [cbe8e6a4-08f3-4db3-af4d-810a5592597c] Running
	I1001 19:23:05.215630   31154 system_pods.go:89] "kube-vip-ha-193737-m02" [da7dfa59-1b27-45ce-9e61-ad8da40ec548] Running
	I1001 19:23:05.215634   31154 system_pods.go:89] "kube-vip-ha-193737-m03" [7a9bbd2f-8b9a-4104-baf4-11efdd662028] Running
	I1001 19:23:05.215639   31154 system_pods.go:89] "storage-provisioner" [d5b587a6-418b-47e5-9bf7-3fb6fa5e3372] Running
	I1001 19:23:05.215647   31154 system_pods.go:126] duration metric: took 210.049347ms to wait for k8s-apps to be running ...
	I1001 19:23:05.215659   31154 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 19:23:05.215714   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 19:23:05.232730   31154 system_svc.go:56] duration metric: took 17.059785ms WaitForService to wait for kubelet
	I1001 19:23:05.232757   31154 kubeadm.go:582] duration metric: took 22.59114375s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 19:23:05.232773   31154 node_conditions.go:102] verifying NodePressure condition ...
	I1001 19:23:05.401103   31154 request.go:632] Waited for 168.256226ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes
	I1001 19:23:05.401154   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes
	I1001 19:23:05.401159   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:05.401165   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:05.401169   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:05.405382   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:23:05.406740   31154 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 19:23:05.406763   31154 node_conditions.go:123] node cpu capacity is 2
	I1001 19:23:05.406777   31154 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 19:23:05.406783   31154 node_conditions.go:123] node cpu capacity is 2
	I1001 19:23:05.406789   31154 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 19:23:05.406794   31154 node_conditions.go:123] node cpu capacity is 2
	I1001 19:23:05.406799   31154 node_conditions.go:105] duration metric: took 174.020761ms to run NodePressure ...
	I1001 19:23:05.406816   31154 start.go:241] waiting for startup goroutines ...
	I1001 19:23:05.406842   31154 start.go:255] writing updated cluster config ...
	I1001 19:23:05.407176   31154 ssh_runner.go:195] Run: rm -f paused
	I1001 19:23:05.459358   31154 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 19:23:05.461856   31154 out.go:177] * Done! kubectl is now configured to use "ha-193737" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 01 19:26:47 ha-193737 crio[661]: time="2024-10-01 19:26:47.016050150Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8ddf36dc2effd5b387a7cb5392afc6a34d689dee2d0047e27480f81871db6f74,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-rbjkx,Uid:ba3ecbe1-fb88-4674-b679-a442b28cd68e,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727810586682758033,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-01T19:23:06.356548410Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7ea8efe8e5b7908a13d9ad3be6c8e7a9e871b332f5889126ea13429055eaaed3,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1727810449150584704,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-01T19:20:48.833089109Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:69e4ceb6e3399a835dcd125c01919d9053abee035e3bf2b2595377489dc56cb8,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-v2wsx,Uid:8e3dd318-5017-4ada-bf2f-61b640ee2030,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727810449146909574,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-5017-4ada-bf2f-61b640ee2030,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-01T19:20:48.833790629Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b4ab4980fd9c6cd234ac1f28d252c896bc77b9f7b271045be59cae0784f96217,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-hd5hv,Uid:31f0afff-5571-46d6-888f-8982c71ba191,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1727810449136545895,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-01T19:20:48.824987880Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f7fcfb918d1fd69f11a77764747f9408ce5ec85d4f114f22269ab22622168240,Metadata:&PodSandboxMetadata{Name:kindnet-wnr6g,Uid:89e11419-0c5c-486e-bdbf-eaf6fab1e62c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727810436813914354,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-10-01T19:20:35.888519006Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:65474abfbeabf483f4a28a3e3229b175f3f41ffad78ea5d1303d49d9419d3ec5,Metadata:&PodSandboxMetadata{Name:kube-proxy-zpsll,Uid:c18fec3c-2880-4860-b220-a44d5e523bed,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727810436811137861,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-01T19:20:35.894320364Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cb787d15fa3b89fa5c4a479def2aed57fb03a1d1abdf07f6170488ae8f5b774f,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-193737,Uid:00cf6ac3eb69fe181eb29ee323afb176,Namespace:kube-system,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1727810424463689607,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cf6ac3eb69fe181eb29ee323afb176,},Annotations:map[string]string{kubernetes.io/config.hash: 00cf6ac3eb69fe181eb29ee323afb176,kubernetes.io/config.seen: 2024-10-01T19:20:23.971420116Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f74fa319889b0624f5157b5a226af0e5a24f37f6922332d6e0037af95aa50aef,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-193737,Uid:26cd510d04d444e2a3fd26699f0dbb26,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727810424458185869,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cd510d04d444e2a3fd26699f0dbb26,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apis
erver.advertise-address.endpoint: 192.168.39.14:8443,kubernetes.io/config.hash: 26cd510d04d444e2a3fd26699f0dbb26,kubernetes.io/config.seen: 2024-10-01T19:20:23.971416640Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4873897c8ffd7c74e4093155af6f4743491c9dcea652c8fba96e6904de277652,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-193737,Uid:0322ee97040a2f569785dff412cf907f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727810424450474160,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0322ee97040a2f569785dff412cf907f,kubernetes.io/config.seen: 2024-10-01T19:20:23.971419282Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d6e9deea0a8069c598634405f46a9fa565d1d0b888716c0c94e4ed5b44588a10,Meta
data:&PodSandboxMetadata{Name:kube-controller-manager-ha-193737,Uid:de600bfbca1d9c3f01fa833eb2f872cd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727810424450223399,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: de600bfbca1d9c3f01fa833eb2f872cd,kubernetes.io/config.seen: 2024-10-01T19:20:23.971418215Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c74bc4df7851a3393d5eca165d5e24dbdef7af644bc9844819bad425c8663aac,Metadata:&PodSandboxMetadata{Name:etcd-ha-193737,Uid:b7769b1af58540331dfe5effd67e84a0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727810424434200231,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-193737,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.14:2379,kubernetes.io/config.hash: b7769b1af58540331dfe5effd67e84a0,kubernetes.io/config.seen: 2024-10-01T19:20:23.971412372Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f4506f96-3258-4d1f-862f-671c9fd24af1 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 01 19:26:47 ha-193737 crio[661]: time="2024-10-01 19:26:47.017407982Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b9cd8a2-211d-470a-a059-6faa9d60c3e2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:47 ha-193737 crio[661]: time="2024-10-01 19:26:47.017471947Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b9cd8a2-211d-470a-a059-6faa9d60c3e2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:47 ha-193737 crio[661]: time="2024-10-01 19:26:47.018095322Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d523f1298c385184a9e7db15e8734998757e06fff0e55aa1844ab59ccf00f07e,PodSandboxId:8ddf36dc2effd5b387a7cb5392afc6a34d689dee2d0047e27480f81871db6f74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727810590371295276,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75485355206ed8610939d128b62d0d55ec84cbb8615f7261469f854e52b6447d,PodSandboxId:7ea8efe8e5b7908a13d9ad3be6c8e7a9e871b332f5889126ea13429055eaaed3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727810449416160580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3,PodSandboxId:b4ab4980fd9c6cd234ac1f28d252c896bc77b9f7b271045be59cae0784f96217,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449360614251,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a,PodSandboxId:69e4ceb6e3399a835dcd125c01919d9053abee035e3bf2b2595377489dc56cb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449354501899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-50
17-4ada-bf2f-61b640ee2030,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525,PodSandboxId:f7fcfb918d1fd69f11a77764747f9408ce5ec85d4f114f22269ab22622168240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278104
37213853474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c,PodSandboxId:65474abfbeabf483f4a28a3e3229b175f3f41ffad78ea5d1303d49d9419d3ec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727810437061931545,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c962c4138a0019c76f981b72e8948efd386403002bb686f7e70c38bc20c3d542,PodSandboxId:cb787d15fa3b89fa5c4a479def2aed57fb03a1d1abdf07f6170488ae8f5b774f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727810427447788769,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cf6ac3eb69fe181eb29ee323afb176,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7,PodSandboxId:4873897c8ffd7c74e4093155af6f4743491c9dcea652c8fba96e6904de277652,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727810424745051374,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e,PodSandboxId:c74bc4df7851a3393d5eca165d5e24dbdef7af644bc9844819bad425c8663aac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727810424759083818,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2c57920320eb65f4477cbff8aec818a7ecddb3461a77ca111172e4d1a5f7e71,PodSandboxId:f74fa319889b0624f5157b5a226af0e5a24f37f6922332d6e0037af95aa50aef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727810424668320387,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cd510d04d444e2a3fd26699f0dbb26,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc9d05172b801a89ec36c0d0d549bfeee1aafb67f6586e3f4f163cb63f01d062,PodSandboxId:d6e9deea0a8069c598634405f46a9fa565d1d0b888716c0c94e4ed5b44588a10,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727810424633610117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b9cd8a2-211d-470a-a059-6faa9d60c3e2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:47 ha-193737 crio[661]: time="2024-10-01 19:26:47.036043177Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ea19bc2a-5973-4f2b-b926-e35d061e7a71 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:26:47 ha-193737 crio[661]: time="2024-10-01 19:26:47.036120744Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ea19bc2a-5973-4f2b-b926-e35d061e7a71 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:26:47 ha-193737 crio[661]: time="2024-10-01 19:26:47.037170070Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=898a1203-83e6-4b25-8ee5-217c2b32fd24 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:26:47 ha-193737 crio[661]: time="2024-10-01 19:26:47.037569599Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810807037550347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=898a1203-83e6-4b25-8ee5-217c2b32fd24 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:26:47 ha-193737 crio[661]: time="2024-10-01 19:26:47.038113216Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dc1d8f10-9dde-4b20-b188-5b76774d35e2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:47 ha-193737 crio[661]: time="2024-10-01 19:26:47.038165581Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dc1d8f10-9dde-4b20-b188-5b76774d35e2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:47 ha-193737 crio[661]: time="2024-10-01 19:26:47.038447261Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d523f1298c385184a9e7db15e8734998757e06fff0e55aa1844ab59ccf00f07e,PodSandboxId:8ddf36dc2effd5b387a7cb5392afc6a34d689dee2d0047e27480f81871db6f74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727810590371295276,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75485355206ed8610939d128b62d0d55ec84cbb8615f7261469f854e52b6447d,PodSandboxId:7ea8efe8e5b7908a13d9ad3be6c8e7a9e871b332f5889126ea13429055eaaed3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727810449416160580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3,PodSandboxId:b4ab4980fd9c6cd234ac1f28d252c896bc77b9f7b271045be59cae0784f96217,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449360614251,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a,PodSandboxId:69e4ceb6e3399a835dcd125c01919d9053abee035e3bf2b2595377489dc56cb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449354501899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-50
17-4ada-bf2f-61b640ee2030,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525,PodSandboxId:f7fcfb918d1fd69f11a77764747f9408ce5ec85d4f114f22269ab22622168240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278104
37213853474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c,PodSandboxId:65474abfbeabf483f4a28a3e3229b175f3f41ffad78ea5d1303d49d9419d3ec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727810437061931545,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c962c4138a0019c76f981b72e8948efd386403002bb686f7e70c38bc20c3d542,PodSandboxId:cb787d15fa3b89fa5c4a479def2aed57fb03a1d1abdf07f6170488ae8f5b774f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727810427447788769,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cf6ac3eb69fe181eb29ee323afb176,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7,PodSandboxId:4873897c8ffd7c74e4093155af6f4743491c9dcea652c8fba96e6904de277652,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727810424745051374,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e,PodSandboxId:c74bc4df7851a3393d5eca165d5e24dbdef7af644bc9844819bad425c8663aac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727810424759083818,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2c57920320eb65f4477cbff8aec818a7ecddb3461a77ca111172e4d1a5f7e71,PodSandboxId:f74fa319889b0624f5157b5a226af0e5a24f37f6922332d6e0037af95aa50aef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727810424668320387,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cd510d04d444e2a3fd26699f0dbb26,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc9d05172b801a89ec36c0d0d549bfeee1aafb67f6586e3f4f163cb63f01d062,PodSandboxId:d6e9deea0a8069c598634405f46a9fa565d1d0b888716c0c94e4ed5b44588a10,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727810424633610117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dc1d8f10-9dde-4b20-b188-5b76774d35e2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:47 ha-193737 crio[661]: time="2024-10-01 19:26:47.074974732Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2befbae5-c6fe-4d8f-baee-e8e61a8442b4 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:26:47 ha-193737 crio[661]: time="2024-10-01 19:26:47.075050120Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2befbae5-c6fe-4d8f-baee-e8e61a8442b4 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:26:47 ha-193737 crio[661]: time="2024-10-01 19:26:47.076455040Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=18b22f16-d99c-437d-8bfc-36d383cf5dcb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:26:47 ha-193737 crio[661]: time="2024-10-01 19:26:47.076973902Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810807076947834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=18b22f16-d99c-437d-8bfc-36d383cf5dcb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:26:47 ha-193737 crio[661]: time="2024-10-01 19:26:47.077582607Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=361192ef-c28d-4ee4-82df-67105e4b251a name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:47 ha-193737 crio[661]: time="2024-10-01 19:26:47.077638503Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=361192ef-c28d-4ee4-82df-67105e4b251a name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:47 ha-193737 crio[661]: time="2024-10-01 19:26:47.078041474Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d523f1298c385184a9e7db15e8734998757e06fff0e55aa1844ab59ccf00f07e,PodSandboxId:8ddf36dc2effd5b387a7cb5392afc6a34d689dee2d0047e27480f81871db6f74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727810590371295276,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75485355206ed8610939d128b62d0d55ec84cbb8615f7261469f854e52b6447d,PodSandboxId:7ea8efe8e5b7908a13d9ad3be6c8e7a9e871b332f5889126ea13429055eaaed3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727810449416160580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3,PodSandboxId:b4ab4980fd9c6cd234ac1f28d252c896bc77b9f7b271045be59cae0784f96217,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449360614251,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a,PodSandboxId:69e4ceb6e3399a835dcd125c01919d9053abee035e3bf2b2595377489dc56cb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449354501899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-50
17-4ada-bf2f-61b640ee2030,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525,PodSandboxId:f7fcfb918d1fd69f11a77764747f9408ce5ec85d4f114f22269ab22622168240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278104
37213853474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c,PodSandboxId:65474abfbeabf483f4a28a3e3229b175f3f41ffad78ea5d1303d49d9419d3ec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727810437061931545,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c962c4138a0019c76f981b72e8948efd386403002bb686f7e70c38bc20c3d542,PodSandboxId:cb787d15fa3b89fa5c4a479def2aed57fb03a1d1abdf07f6170488ae8f5b774f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727810427447788769,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cf6ac3eb69fe181eb29ee323afb176,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7,PodSandboxId:4873897c8ffd7c74e4093155af6f4743491c9dcea652c8fba96e6904de277652,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727810424745051374,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e,PodSandboxId:c74bc4df7851a3393d5eca165d5e24dbdef7af644bc9844819bad425c8663aac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727810424759083818,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2c57920320eb65f4477cbff8aec818a7ecddb3461a77ca111172e4d1a5f7e71,PodSandboxId:f74fa319889b0624f5157b5a226af0e5a24f37f6922332d6e0037af95aa50aef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727810424668320387,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cd510d04d444e2a3fd26699f0dbb26,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc9d05172b801a89ec36c0d0d549bfeee1aafb67f6586e3f4f163cb63f01d062,PodSandboxId:d6e9deea0a8069c598634405f46a9fa565d1d0b888716c0c94e4ed5b44588a10,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727810424633610117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=361192ef-c28d-4ee4-82df-67105e4b251a name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:47 ha-193737 crio[661]: time="2024-10-01 19:26:47.117319282Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8bffa2a2-eac5-46b5-884b-7575871dee0c name=/runtime.v1.RuntimeService/Version
	Oct 01 19:26:47 ha-193737 crio[661]: time="2024-10-01 19:26:47.117405951Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8bffa2a2-eac5-46b5-884b-7575871dee0c name=/runtime.v1.RuntimeService/Version
	Oct 01 19:26:47 ha-193737 crio[661]: time="2024-10-01 19:26:47.118553124Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b0ffa462-d97a-47ef-afc1-12f3f0bbfb60 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:26:47 ha-193737 crio[661]: time="2024-10-01 19:26:47.119137431Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810807119108790,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b0ffa462-d97a-47ef-afc1-12f3f0bbfb60 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:26:47 ha-193737 crio[661]: time="2024-10-01 19:26:47.119684092Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8774e0b0-d3c1-4f15-9fc9-94260c5af2df name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:47 ha-193737 crio[661]: time="2024-10-01 19:26:47.119773499Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8774e0b0-d3c1-4f15-9fc9-94260c5af2df name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:47 ha-193737 crio[661]: time="2024-10-01 19:26:47.120012283Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d523f1298c385184a9e7db15e8734998757e06fff0e55aa1844ab59ccf00f07e,PodSandboxId:8ddf36dc2effd5b387a7cb5392afc6a34d689dee2d0047e27480f81871db6f74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727810590371295276,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75485355206ed8610939d128b62d0d55ec84cbb8615f7261469f854e52b6447d,PodSandboxId:7ea8efe8e5b7908a13d9ad3be6c8e7a9e871b332f5889126ea13429055eaaed3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727810449416160580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3,PodSandboxId:b4ab4980fd9c6cd234ac1f28d252c896bc77b9f7b271045be59cae0784f96217,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449360614251,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a,PodSandboxId:69e4ceb6e3399a835dcd125c01919d9053abee035e3bf2b2595377489dc56cb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449354501899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-50
17-4ada-bf2f-61b640ee2030,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525,PodSandboxId:f7fcfb918d1fd69f11a77764747f9408ce5ec85d4f114f22269ab22622168240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278104
37213853474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c,PodSandboxId:65474abfbeabf483f4a28a3e3229b175f3f41ffad78ea5d1303d49d9419d3ec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727810437061931545,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c962c4138a0019c76f981b72e8948efd386403002bb686f7e70c38bc20c3d542,PodSandboxId:cb787d15fa3b89fa5c4a479def2aed57fb03a1d1abdf07f6170488ae8f5b774f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727810427447788769,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cf6ac3eb69fe181eb29ee323afb176,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7,PodSandboxId:4873897c8ffd7c74e4093155af6f4743491c9dcea652c8fba96e6904de277652,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727810424745051374,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e,PodSandboxId:c74bc4df7851a3393d5eca165d5e24dbdef7af644bc9844819bad425c8663aac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727810424759083818,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2c57920320eb65f4477cbff8aec818a7ecddb3461a77ca111172e4d1a5f7e71,PodSandboxId:f74fa319889b0624f5157b5a226af0e5a24f37f6922332d6e0037af95aa50aef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727810424668320387,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cd510d04d444e2a3fd26699f0dbb26,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc9d05172b801a89ec36c0d0d549bfeee1aafb67f6586e3f4f163cb63f01d062,PodSandboxId:d6e9deea0a8069c598634405f46a9fa565d1d0b888716c0c94e4ed5b44588a10,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727810424633610117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8774e0b0-d3c1-4f15-9fc9-94260c5af2df name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d523f1298c385       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   8ddf36dc2effd       busybox-7dff88458-rbjkx
	75485355206ed       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   7ea8efe8e5b79       storage-provisioner
	b9a32cfd9baec       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   b4ab4980fd9c6       coredns-7c65d6cfc9-hd5hv
	c598f8345f1d8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   69e4ceb6e3399       coredns-7c65d6cfc9-v2wsx
	25b91984e532b       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   f7fcfb918d1fd       kindnet-wnr6g
	6ce5a1ca06729       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   65474abfbeabf       kube-proxy-zpsll
	c962c4138a001       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   cb787d15fa3b8       kube-vip-ha-193737
	7092a3841df08       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   c74bc4df7851a       etcd-ha-193737
	d7d722793679c       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   4873897c8ffd7       kube-scheduler-ha-193737
	d2c57920320eb       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   f74fa319889b0       kube-apiserver-ha-193737
	fc9d05172b801       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   d6e9deea0a806       kube-controller-manager-ha-193737
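
	The table above is this node's CRI container listing, i.e. the same data returned by the /runtime.v1.RuntimeService/ListContainers calls in the CRI-O log. As a rough sketch only, assuming the profile name ha-193737 from this run and the stock minikube CLI rather than the test harness binary, the equivalent listing could be pulled by hand with:
	
	    minikube -p ha-193737 ssh "sudo crictl ps -a"
	
	crictl talks to the same unix:///var/run/crio/crio.sock endpoint recorded in the node annotations.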
	
	
	==> coredns [b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3] <==
	[INFO] 10.244.1.2:43526 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.003536908s
	[INFO] 10.244.1.2:59594 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.012224538s
	[INFO] 10.244.2.2:37785 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000112105s
	[INFO] 10.244.0.4:34398 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000118394s
	[INFO] 10.244.0.4:35218 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001965777s
	[INFO] 10.244.1.2:56827 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00018086s
	[INFO] 10.244.1.2:50439 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003922693s
	[INFO] 10.244.2.2:33611 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123417s
	[INFO] 10.244.2.2:37877 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000204398s
	[INFO] 10.244.2.2:42894 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000164711s
	[INFO] 10.244.0.4:58512 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012749s
	[INFO] 10.244.0.4:60496 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126088s
	[INFO] 10.244.0.4:42876 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000054151s
	[INFO] 10.244.0.4:46048 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001023388s
	[INFO] 10.244.0.4:45307 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069619s
	[INFO] 10.244.0.4:54830 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086737s
	[INFO] 10.244.1.2:56566 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104818s
	[INFO] 10.244.2.2:44960 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017462s
	[INFO] 10.244.2.2:35520 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000147677s
	[INFO] 10.244.0.4:34887 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000089068s
	[INFO] 10.244.0.4:47038 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093137s
	[INFO] 10.244.1.2:44935 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000181924s
	[INFO] 10.244.2.2:51593 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000184246s
	[INFO] 10.244.2.2:37070 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000101666s
	[INFO] 10.244.0.4:49420 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115127s
	
	
	==> coredns [c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a] <==
	[INFO] 10.244.1.2:42880 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000139838s
	[INFO] 10.244.1.2:41832 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000162686s
	[INFO] 10.244.1.2:46697 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110911s
	[INFO] 10.244.2.2:37495 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001830157s
	[INFO] 10.244.2.2:39183 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155283s
	[INFO] 10.244.2.2:47614 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170182s
	[INFO] 10.244.2.2:52937 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001095974s
	[INFO] 10.244.2.2:59751 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106474s
	[INFO] 10.244.0.4:55786 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001514187s
	[INFO] 10.244.0.4:56387 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000050769s
	[INFO] 10.244.1.2:54787 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013733s
	[INFO] 10.244.1.2:58281 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113165s
	[INFO] 10.244.1.2:48712 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097722s
	[INFO] 10.244.2.2:57237 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152523s
	[INFO] 10.244.2.2:47314 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106445s
	[INFO] 10.244.0.4:43887 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000199016s
	[INFO] 10.244.0.4:49901 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000240769s
	[INFO] 10.244.1.2:54100 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210259s
	[INFO] 10.244.1.2:60342 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000221646s
	[INFO] 10.244.1.2:33783 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000165277s
	[INFO] 10.244.2.2:45378 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000197846s
	[INFO] 10.244.2.2:33324 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000101556s
	[INFO] 10.244.0.4:40016 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000071122s
	[INFO] 10.244.0.4:40114 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135338s
	[INFO] 10.244.0.4:53904 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00006854s
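
	The lookups logged above (kubernetes.io, kubernetes.default, host.minikube.internal and friends) appear to come from the busybox test pods on each node. A minimal way to rerun one by hand, assuming the kubectl context matches the profile name and using the busybox pod name shown in this run:
	
	    kubectl --context ha-193737 exec busybox-7dff88458-rbjkx -- nslookup kubernetes.io
	
	A NOERROR answer corresponds to the healthy CoreDNS responses above.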
	
	
	==> describe nodes <==
	Name:               ha-193737
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-193737
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=ha-193737
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T19_20_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:20:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-193737
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:26:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 19:23:34 +0000   Tue, 01 Oct 2024 19:20:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 19:23:34 +0000   Tue, 01 Oct 2024 19:20:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 19:23:34 +0000   Tue, 01 Oct 2024 19:20:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 19:23:34 +0000   Tue, 01 Oct 2024 19:20:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.14
	  Hostname:    ha-193737
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 008c1ccd624b4ab3b90055ff9f65b018
	  System UUID:                008c1ccd-624b-4ab3-b900-55ff9f65b018
	  Boot ID:                    ad12c9f1-7a18-4d35-9ec9-00d91da3365b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rbjkx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 coredns-7c65d6cfc9-hd5hv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m11s
	  kube-system                 coredns-7c65d6cfc9-v2wsx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m11s
	  kube-system                 etcd-ha-193737                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m16s
	  kube-system                 kindnet-wnr6g                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m12s
	  kube-system                 kube-apiserver-ha-193737             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-controller-manager-ha-193737    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-proxy-zpsll                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-scheduler-ha-193737             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-vip-ha-193737                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m9s                   kube-proxy       
	  Normal  Starting                 6m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     6m23s (x7 over 6m24s)  kubelet          Node ha-193737 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m23s (x8 over 6m24s)  kubelet          Node ha-193737 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m23s (x8 over 6m24s)  kubelet          Node ha-193737 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  6m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m16s                  kubelet          Node ha-193737 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m16s                  kubelet          Node ha-193737 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m16s                  kubelet          Node ha-193737 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m12s                  node-controller  Node ha-193737 event: Registered Node ha-193737 in Controller
	  Normal  NodeReady                5m59s                  kubelet          Node ha-193737 status is now: NodeReady
	  Normal  RegisteredNode           5m16s                  node-controller  Node ha-193737 event: Registered Node ha-193737 in Controller
	  Normal  RegisteredNode           4m1s                   node-controller  Node ha-193737 event: Registered Node ha-193737 in Controller
	
	
	Name:               ha-193737-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-193737-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=ha-193737
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T19_21_26_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:21:23 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-193737-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:24:17 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 01 Oct 2024 19:23:25 +0000   Tue, 01 Oct 2024 19:25:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 01 Oct 2024 19:23:25 +0000   Tue, 01 Oct 2024 19:25:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 01 Oct 2024 19:23:25 +0000   Tue, 01 Oct 2024 19:25:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 01 Oct 2024 19:23:25 +0000   Tue, 01 Oct 2024 19:25:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    ha-193737-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e20c76476d7c4acaa5fd75e5b8fa3bab
	  System UUID:                e20c7647-6d7c-4aca-a5fd-75e5b8fa3bab
	  Boot ID:                    6ae84c19-5df4-457f-b75c-eae86d5e0ee1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fz5bb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-193737-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m22s
	  kube-system                 kindnet-drdlr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m24s
	  kube-system                 kube-apiserver-ha-193737-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-controller-manager-ha-193737-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-proxy-4294m                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-scheduler-ha-193737-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-vip-ha-193737-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m19s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m24s (x8 over 5m24s)  kubelet          Node ha-193737-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m24s (x8 over 5m24s)  kubelet          Node ha-193737-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m24s (x7 over 5m24s)  kubelet          Node ha-193737-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m22s                  node-controller  Node ha-193737-m02 event: Registered Node ha-193737-m02 in Controller
	  Normal  RegisteredNode           5m16s                  node-controller  Node ha-193737-m02 event: Registered Node ha-193737-m02 in Controller
	  Normal  RegisteredNode           4m1s                   node-controller  Node ha-193737-m02 event: Registered Node ha-193737-m02 in Controller
	  Normal  NodeNotReady             107s                   node-controller  Node ha-193737-m02 status is now: NodeNotReady
	
	
	Name:               ha-193737-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-193737-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=ha-193737
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T19_22_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:22:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-193737-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:26:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 19:23:39 +0000   Tue, 01 Oct 2024 19:22:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 19:23:39 +0000   Tue, 01 Oct 2024 19:22:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 19:23:39 +0000   Tue, 01 Oct 2024 19:22:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 19:23:39 +0000   Tue, 01 Oct 2024 19:22:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    ha-193737-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f175e16bf19e4217880e926a75ac0065
	  System UUID:                f175e16b-f19e-4217-880e-926a75ac0065
	  Boot ID:                    5dc1c664-a01d-46eb-a066-a1970597b392
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-qzzzv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-193737-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m7s
	  kube-system                 kindnet-bqht8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m9s
	  kube-system                 kube-apiserver-ha-193737-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-controller-manager-ha-193737-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-proxy-9pm4t                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-scheduler-ha-193737-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-vip-ha-193737-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m4s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m9s (x8 over 4m9s)  kubelet          Node ha-193737-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x8 over 4m9s)  kubelet          Node ha-193737-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x7 over 4m9s)  kubelet          Node ha-193737-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m7s                 node-controller  Node ha-193737-m03 event: Registered Node ha-193737-m03 in Controller
	  Normal  RegisteredNode           4m6s                 node-controller  Node ha-193737-m03 event: Registered Node ha-193737-m03 in Controller
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-193737-m03 event: Registered Node ha-193737-m03 in Controller
	
	
	Name:               ha-193737-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-193737-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=ha-193737
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T19_23_47_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:23:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-193737-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:26:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 19:24:17 +0000   Tue, 01 Oct 2024 19:23:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 19:24:17 +0000   Tue, 01 Oct 2024 19:23:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 19:24:17 +0000   Tue, 01 Oct 2024 19:23:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 19:24:17 +0000   Tue, 01 Oct 2024 19:24:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.152
	  Hostname:    ha-193737-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef1097b5e0604ff19d7361f2921010b9
	  System UUID:                ef1097b5-e060-4ff1-9d73-61f2921010b9
	  Boot ID:                    e616be63-4a8a-41b8-a0fc-2b1d892a1200
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-h886q       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m1s
	  kube-system                 kube-proxy-hz2nn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m1s (x3 over 3m1s)  kubelet          Node ha-193737-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x3 over 3m1s)  kubelet          Node ha-193737-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x3 over 3m1s)  kubelet          Node ha-193737-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m57s                node-controller  Node ha-193737-m04 event: Registered Node ha-193737-m04 in Controller
	  Normal  RegisteredNode           2m56s                node-controller  Node ha-193737-m04 event: Registered Node ha-193737-m04 in Controller
	  Normal  RegisteredNode           2m56s                node-controller  Node ha-193737-m04 event: Registered Node ha-193737-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-193737-m04 status is now: NodeReady
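
	The node descriptions above show ha-193737, ha-193737-m03 and ha-193737-m04 as Ready, while ha-193737-m02 still carries node.kubernetes.io/unreachable taints and Unknown conditions after the secondary-node stop. A quick way to see the same summary, again assuming the kubectl context matches the profile name:
	
	    kubectl --context ha-193737 get nodes -o wide
	    kubectl --context ha-193737 describe node ha-193737-m02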
	
	
	==> dmesg <==
	[Oct 1 19:19] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050773] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037054] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.754509] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.921161] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Oct 1 19:20] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.804167] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.059657] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065329] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.157689] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.148971] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.256595] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +3.897654] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +5.026995] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.059544] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.061605] systemd-fstab-generator[1305]: Ignoring "noauto" option for root device
	[  +0.119912] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.150839] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.375138] kauditd_printk_skb: 38 callbacks suppressed
	[Oct 1 19:21] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e] <==
	{"level":"warn","ts":"2024-10-01T19:26:47.359467Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:47.373074Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:47.383211Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:47.388393Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:47.404065Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:47.404644Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:47.416223Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:47.424327Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:47.428521Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:47.432469Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:47.439940Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:47.449932Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:47.457105Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:47.458962Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:47.460904Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:47.464274Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:47.475322Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:47.484379Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:47.491013Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:47.495110Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:47.498539Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:47.502376Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:47.510359Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:47.518133Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:47.559560Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:26:47 up 6 min,  0 users,  load average: 0.38, 0.33, 0.18
	Linux ha-193737 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525] <==
	I1001 19:26:08.345209       1 main.go:299] handling current node
	I1001 19:26:18.353412       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I1001 19:26:18.353544       1 main.go:299] handling current node
	I1001 19:26:18.353578       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I1001 19:26:18.353599       1 main.go:322] Node ha-193737-m02 has CIDR [10.244.1.0/24] 
	I1001 19:26:18.353799       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I1001 19:26:18.356019       1 main.go:322] Node ha-193737-m03 has CIDR [10.244.2.0/24] 
	I1001 19:26:18.356213       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1001 19:26:18.356246       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	I1001 19:26:28.353932       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I1001 19:26:28.354077       1 main.go:299] handling current node
	I1001 19:26:28.354108       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I1001 19:26:28.354126       1 main.go:322] Node ha-193737-m02 has CIDR [10.244.1.0/24] 
	I1001 19:26:28.354260       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I1001 19:26:28.354312       1 main.go:322] Node ha-193737-m03 has CIDR [10.244.2.0/24] 
	I1001 19:26:28.354433       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1001 19:26:28.354480       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	I1001 19:26:38.345063       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I1001 19:26:38.345186       1 main.go:299] handling current node
	I1001 19:26:38.345230       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I1001 19:26:38.345253       1 main.go:322] Node ha-193737-m02 has CIDR [10.244.1.0/24] 
	I1001 19:26:38.345420       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I1001 19:26:38.345447       1 main.go:322] Node ha-193737-m03 has CIDR [10.244.2.0/24] 
	I1001 19:26:38.345532       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1001 19:26:38.345554       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [d2c57920320eb65f4477cbff8aec818a7ecddb3461a77ca111172e4d1a5f7e71] <==
	I1001 19:20:35.856444       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1001 19:20:35.965501       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1001 19:21:24.240949       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1001 19:21:24.240967       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 17.015µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1001 19:21:24.242740       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1001 19:21:24.244065       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1001 19:21:24.245377       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.686767ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1001 19:23:11.375797       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53914: use of closed network connection
	E1001 19:23:11.551258       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53928: use of closed network connection
	E1001 19:23:11.731362       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53936: use of closed network connection
	E1001 19:23:11.972041       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53954: use of closed network connection
	E1001 19:23:12.366625       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53984: use of closed network connection
	E1001 19:23:12.546073       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54012: use of closed network connection
	E1001 19:23:12.732610       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54022: use of closed network connection
	E1001 19:23:12.902151       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54038: use of closed network connection
	E1001 19:23:13.375286       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54102: use of closed network connection
	E1001 19:23:13.554664       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54126: use of closed network connection
	E1001 19:23:13.743236       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54138: use of closed network connection
	E1001 19:23:13.926913       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54164: use of closed network connection
	E1001 19:23:14.106331       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54176: use of closed network connection
	E1001 19:23:47.033544       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1001 19:23:47.034526       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 71.236µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1001 19:23:47.042011       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1001 19:23:47.046959       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1001 19:23:47.048673       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="15.259067ms" method="POST" path="/api/v1/namespaces/default/events" result=null
	
	
	==> kube-controller-manager [fc9d05172b801a89ec36c0d0d549bfeee1aafb67f6586e3f4f163cb63f01d062] <==
	I1001 19:23:46.953662       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-193737-m04\" does not exist"
	I1001 19:23:46.986878       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-193737-m04" podCIDRs=["10.244.3.0/24"]
	I1001 19:23:46.986941       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:46.987007       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:47.215804       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:47.592799       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:50.155095       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-193737-m04"
	I1001 19:23:50.259908       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:51.578375       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:51.680209       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:51.931826       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:52.014093       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:57.305544       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:24:06.597966       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:24:06.598358       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-193737-m04"
	I1001 19:24:06.614401       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:24:06.949883       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:24:17.699273       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:25:00.186561       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m02"
	I1001 19:25:00.186799       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-193737-m04"
	I1001 19:25:00.216973       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m02"
	I1001 19:25:00.303275       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="68.678995ms"
	I1001 19:25:00.303561       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="142.589µs"
	I1001 19:25:01.983529       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m02"
	I1001 19:25:05.453661       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m02"
	
	
	==> kube-proxy [6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 19:20:37.420079       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 19:20:37.442921       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.14"]
	E1001 19:20:37.443047       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 19:20:37.482251       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 19:20:37.482297       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 19:20:37.482322       1 server_linux.go:169] "Using iptables Proxier"
	I1001 19:20:37.485863       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 19:20:37.486623       1 server.go:483] "Version info" version="v1.31.1"
	I1001 19:20:37.486654       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 19:20:37.489107       1 config.go:199] "Starting service config controller"
	I1001 19:20:37.489328       1 config.go:105] "Starting endpoint slice config controller"
	I1001 19:20:37.489656       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 19:20:37.489772       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 19:20:37.491468       1 config.go:328] "Starting node config controller"
	I1001 19:20:37.491495       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 19:20:37.590528       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 19:20:37.590619       1 shared_informer.go:320] Caches are synced for service config
	I1001 19:20:37.591994       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7] <==
	E1001 19:20:29.084572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1001 19:20:30.974700       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1001 19:23:06.369501       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rbjkx\": pod busybox-7dff88458-rbjkx is already assigned to node \"ha-193737\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rbjkx" node="ha-193737"
	E1001 19:23:06.370091       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ba3ecbe1-fb88-4674-b679-a442b28cd68e(default/busybox-7dff88458-rbjkx) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-rbjkx"
	E1001 19:23:06.370388       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rbjkx\": pod busybox-7dff88458-rbjkx is already assigned to node \"ha-193737\"" pod="default/busybox-7dff88458-rbjkx"
	I1001 19:23:06.374870       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-rbjkx" node="ha-193737"
	E1001 19:23:06.474319       1 schedule_one.go:1078] "Error occurred" err="Pod default/busybox-7dff88458-9k8vh is already present in the active queue" pod="default/busybox-7dff88458-9k8vh"
	E1001 19:23:06.510626       1 schedule_one.go:1078] "Error occurred" err="Pod default/busybox-7dff88458-x4nmn is already present in the active queue" pod="default/busybox-7dff88458-x4nmn"
	E1001 19:23:47.032927       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-tfcsk\": pod kindnet-tfcsk is already assigned to node \"ha-193737-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-tfcsk" node="ha-193737-m04"
	E1001 19:23:47.033064       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-tfcsk\": pod kindnet-tfcsk is already assigned to node \"ha-193737-m04\"" pod="kube-system/kindnet-tfcsk"
	E1001 19:23:47.032927       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-hz2nn\": pod kube-proxy-hz2nn is already assigned to node \"ha-193737-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-hz2nn" node="ha-193737-m04"
	E1001 19:23:47.045815       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 4f960179-106c-4201-b54b-eea8c5aea0dc(kube-system/kube-proxy-hz2nn) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-hz2nn"
	E1001 19:23:47.046589       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-hz2nn\": pod kube-proxy-hz2nn is already assigned to node \"ha-193737-m04\"" pod="kube-system/kube-proxy-hz2nn"
	I1001 19:23:47.046769       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-hz2nn" node="ha-193737-m04"
	E1001 19:23:47.062993       1 schedule_one.go:953] "Scheduler cache AssumePod failed" err="pod 046c48a4-b41b-4a77-8949-aa553947416b(kube-system/kindnet-h886q) is in the cache, so can't be assumed" pod="kube-system/kindnet-h886q"
	E1001 19:23:47.065004       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="pod 046c48a4-b41b-4a77-8949-aa553947416b(kube-system/kindnet-h886q) is in the cache, so can't be assumed" pod="kube-system/kindnet-h886q"
	I1001 19:23:47.065109       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-h886q" node="ha-193737-m04"
	E1001 19:23:47.081592       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-z5qhk\": pod kube-proxy-z5qhk is already assigned to node \"ha-193737-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-z5qhk" node="ha-193737-m04"
	E1001 19:23:47.081864       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 785d6c85-2697-4f02-80a4-55483a0faa64(kube-system/kube-proxy-z5qhk) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-z5qhk"
	E1001 19:23:47.081920       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-z5qhk\": pod kube-proxy-z5qhk is already assigned to node \"ha-193737-m04\"" pod="kube-system/kube-proxy-z5qhk"
	I1001 19:23:47.083299       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-z5qhk" node="ha-193737-m04"
	E1001 19:23:47.138476       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4q2pc\": pod kindnet-4q2pc is already assigned to node \"ha-193737-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4q2pc" node="ha-193737-m04"
	E1001 19:23:47.138649       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f23b02a5-c64e-44c3-83b9-7192d19a6efc(kube-system/kindnet-4q2pc) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-4q2pc"
	E1001 19:23:47.138779       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4q2pc\": pod kindnet-4q2pc is already assigned to node \"ha-193737-m04\"" pod="kube-system/kindnet-4q2pc"
	I1001 19:23:47.138823       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-4q2pc" node="ha-193737-m04"
	
	
	==> kubelet <==
	Oct 01 19:25:31 ha-193737 kubelet[1313]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 01 19:25:31 ha-193737 kubelet[1313]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 01 19:25:31 ha-193737 kubelet[1313]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 19:25:31 ha-193737 kubelet[1313]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 19:25:31 ha-193737 kubelet[1313]: E1001 19:25:31.112855    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810731112438565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:25:31 ha-193737 kubelet[1313]: E1001 19:25:31.112899    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810731112438565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:25:41 ha-193737 kubelet[1313]: E1001 19:25:41.114457    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810741114104863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:25:41 ha-193737 kubelet[1313]: E1001 19:25:41.114791    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810741114104863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:25:51 ha-193737 kubelet[1313]: E1001 19:25:51.116278    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810751115811001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:25:51 ha-193737 kubelet[1313]: E1001 19:25:51.116653    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810751115811001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:01 ha-193737 kubelet[1313]: E1001 19:26:01.119303    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810761118827447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:01 ha-193737 kubelet[1313]: E1001 19:26:01.119351    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810761118827447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:11 ha-193737 kubelet[1313]: E1001 19:26:11.121360    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810771121035313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:11 ha-193737 kubelet[1313]: E1001 19:26:11.121412    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810771121035313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:21 ha-193737 kubelet[1313]: E1001 19:26:21.123512    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810781123120430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:21 ha-193737 kubelet[1313]: E1001 19:26:21.123938    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810781123120430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:31 ha-193737 kubelet[1313]: E1001 19:26:31.044582    1313 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 01 19:26:31 ha-193737 kubelet[1313]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 01 19:26:31 ha-193737 kubelet[1313]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 01 19:26:31 ha-193737 kubelet[1313]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 19:26:31 ha-193737 kubelet[1313]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 19:26:31 ha-193737 kubelet[1313]: E1001 19:26:31.126194    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810791125910385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:31 ha-193737 kubelet[1313]: E1001 19:26:31.126217    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810791125910385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:41 ha-193737 kubelet[1313]: E1001 19:26:41.128087    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810801127576002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:41 ha-193737 kubelet[1313]: E1001 19:26:41.128431    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810801127576002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-193737 -n ha-193737
helpers_test.go:261: (dbg) Run:  kubectl --context ha-193737 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.43s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.380460276s)
ha_test.go:415: expected profile "ha-193737" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-193737\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-193737\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-193737\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.14\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.27\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.101\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.152\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-193737 -n ha-193737
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-193737 logs -n 25: (1.367682207s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-193737 cp ha-193737-m03:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1122815483/001/cp-test_ha-193737-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m03:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737:/home/docker/cp-test_ha-193737-m03_ha-193737.txt                       |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737 sudo cat                                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m03_ha-193737.txt                                 |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m03:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m02:/home/docker/cp-test_ha-193737-m03_ha-193737-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737-m02 sudo cat                                          | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m03_ha-193737-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m03:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04:/home/docker/cp-test_ha-193737-m03_ha-193737-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737-m04 sudo cat                                          | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m03_ha-193737-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-193737 cp testdata/cp-test.txt                                                | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m04:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1122815483/001/cp-test_ha-193737-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m04:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737:/home/docker/cp-test_ha-193737-m04_ha-193737.txt                       |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737 sudo cat                                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m04_ha-193737.txt                                 |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m04:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m02:/home/docker/cp-test_ha-193737-m04_ha-193737-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737-m02 sudo cat                                          | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m04_ha-193737-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m04:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m03:/home/docker/cp-test_ha-193737-m04_ha-193737-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737-m03 sudo cat                                          | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m04_ha-193737-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-193737 node stop m02 -v=7                                                     | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 19:19:47
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 19:19:47.806967   31154 out.go:345] Setting OutFile to fd 1 ...
	I1001 19:19:47.807072   31154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:19:47.807081   31154 out.go:358] Setting ErrFile to fd 2...
	I1001 19:19:47.807085   31154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:19:47.807300   31154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 19:19:47.807883   31154 out.go:352] Setting JSON to false
	I1001 19:19:47.808862   31154 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3730,"bootTime":1727806658,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 19:19:47.808959   31154 start.go:139] virtualization: kvm guest
	I1001 19:19:47.810915   31154 out.go:177] * [ha-193737] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 19:19:47.812033   31154 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 19:19:47.812047   31154 notify.go:220] Checking for updates...
	I1001 19:19:47.814140   31154 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 19:19:47.815207   31154 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 19:19:47.816467   31154 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:19:47.817736   31154 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 19:19:47.818886   31154 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 19:19:47.820159   31154 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 19:19:47.855456   31154 out.go:177] * Using the kvm2 driver based on user configuration
	I1001 19:19:47.856527   31154 start.go:297] selected driver: kvm2
	I1001 19:19:47.856547   31154 start.go:901] validating driver "kvm2" against <nil>
	I1001 19:19:47.856562   31154 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 19:19:47.857294   31154 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 19:19:47.857376   31154 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-11198/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 19:19:47.872487   31154 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 19:19:47.872546   31154 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 19:19:47.872796   31154 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 19:19:47.872826   31154 cni.go:84] Creating CNI manager for ""
	I1001 19:19:47.872874   31154 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1001 19:19:47.872886   31154 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 19:19:47.872938   31154 start.go:340] cluster config:
	{Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:19:47.873050   31154 iso.go:125] acquiring lock: {Name:mk06aa0d5182c9fbfa5e40313025cbec2b4400a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 19:19:47.874719   31154 out.go:177] * Starting "ha-193737" primary control-plane node in "ha-193737" cluster
	I1001 19:19:47.875804   31154 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 19:19:47.875840   31154 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 19:19:47.875850   31154 cache.go:56] Caching tarball of preloaded images
	I1001 19:19:47.875957   31154 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 19:19:47.875970   31154 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 19:19:47.876255   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:19:47.876273   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json: {Name:mk44677a1f0c01c3be022903d4a146ca8f437dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:19:47.876454   31154 start.go:360] acquireMachinesLock for ha-193737: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 19:19:47.876490   31154 start.go:364] duration metric: took 20.799µs to acquireMachinesLock for "ha-193737"
	I1001 19:19:47.876512   31154 start.go:93] Provisioning new machine with config: &{Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:19:47.876581   31154 start.go:125] createHost starting for "" (driver="kvm2")
	I1001 19:19:47.878132   31154 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 19:19:47.878257   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:19:47.878301   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:19:47.892637   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32839
	I1001 19:19:47.893161   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:19:47.893766   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:19:47.893788   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:19:47.894083   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:19:47.894225   31154 main.go:141] libmachine: (ha-193737) Calling .GetMachineName
	I1001 19:19:47.894350   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:19:47.894482   31154 start.go:159] libmachine.API.Create for "ha-193737" (driver="kvm2")
	I1001 19:19:47.894506   31154 client.go:168] LocalClient.Create starting
	I1001 19:19:47.894539   31154 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem
	I1001 19:19:47.894575   31154 main.go:141] libmachine: Decoding PEM data...
	I1001 19:19:47.894607   31154 main.go:141] libmachine: Parsing certificate...
	I1001 19:19:47.894667   31154 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem
	I1001 19:19:47.894686   31154 main.go:141] libmachine: Decoding PEM data...
	I1001 19:19:47.894699   31154 main.go:141] libmachine: Parsing certificate...
	I1001 19:19:47.894713   31154 main.go:141] libmachine: Running pre-create checks...
	I1001 19:19:47.894730   31154 main.go:141] libmachine: (ha-193737) Calling .PreCreateCheck
	I1001 19:19:47.895057   31154 main.go:141] libmachine: (ha-193737) Calling .GetConfigRaw
	I1001 19:19:47.895392   31154 main.go:141] libmachine: Creating machine...
	I1001 19:19:47.895405   31154 main.go:141] libmachine: (ha-193737) Calling .Create
	I1001 19:19:47.895568   31154 main.go:141] libmachine: (ha-193737) Creating KVM machine...
	I1001 19:19:47.896749   31154 main.go:141] libmachine: (ha-193737) DBG | found existing default KVM network
	I1001 19:19:47.897409   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:47.897251   31177 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211e0}
	I1001 19:19:47.897459   31154 main.go:141] libmachine: (ha-193737) DBG | created network xml: 
	I1001 19:19:47.897477   31154 main.go:141] libmachine: (ha-193737) DBG | <network>
	I1001 19:19:47.897495   31154 main.go:141] libmachine: (ha-193737) DBG |   <name>mk-ha-193737</name>
	I1001 19:19:47.897509   31154 main.go:141] libmachine: (ha-193737) DBG |   <dns enable='no'/>
	I1001 19:19:47.897529   31154 main.go:141] libmachine: (ha-193737) DBG |   
	I1001 19:19:47.897549   31154 main.go:141] libmachine: (ha-193737) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1001 19:19:47.897562   31154 main.go:141] libmachine: (ha-193737) DBG |     <dhcp>
	I1001 19:19:47.897573   31154 main.go:141] libmachine: (ha-193737) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1001 19:19:47.897582   31154 main.go:141] libmachine: (ha-193737) DBG |     </dhcp>
	I1001 19:19:47.897589   31154 main.go:141] libmachine: (ha-193737) DBG |   </ip>
	I1001 19:19:47.897594   31154 main.go:141] libmachine: (ha-193737) DBG |   
	I1001 19:19:47.897599   31154 main.go:141] libmachine: (ha-193737) DBG | </network>
	I1001 19:19:47.897608   31154 main.go:141] libmachine: (ha-193737) DBG | 
	I1001 19:19:47.902355   31154 main.go:141] libmachine: (ha-193737) DBG | trying to create private KVM network mk-ha-193737 192.168.39.0/24...
	I1001 19:19:47.965826   31154 main.go:141] libmachine: (ha-193737) DBG | private KVM network mk-ha-193737 192.168.39.0/24 created
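
The network XML assembled in the DBG lines above is handed to libvirt to define and start the private mk-ha-193737 network. As a rough illustration only (the kvm2 driver plugin talks to libvirt directly rather than shelling out), the same two steps could be driven from Go through the virsh CLI; the XML file path below is an assumption:

// netdefine.go: hypothetical sketch of defining and starting a libvirt
// network from an XML file by shelling out to virsh. Not the kvm2 driver's
// actual code; names and paths are illustrative only.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	xmlPath := "/tmp/mk-ha-193737.xml" // assumed file holding the <network> XML above
	if err := run("virsh", "net-define", xmlPath); err != nil {
		fmt.Fprintln(os.Stderr, "net-define failed:", err)
		os.Exit(1)
	}
	if err := run("virsh", "net-start", "mk-ha-193737"); err != nil {
		fmt.Fprintln(os.Stderr, "net-start failed:", err)
		os.Exit(1)
	}
}
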
	I1001 19:19:47.965857   31154 main.go:141] libmachine: (ha-193737) Setting up store path in /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737 ...
	I1001 19:19:47.965875   31154 main.go:141] libmachine: (ha-193737) Building disk image from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 19:19:47.965943   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:47.965838   31177 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:19:47.966014   31154 main.go:141] libmachine: (ha-193737) Downloading /home/jenkins/minikube-integration/19736-11198/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 19:19:48.225463   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:48.225322   31177 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa...
	I1001 19:19:48.498755   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:48.498602   31177 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/ha-193737.rawdisk...
	I1001 19:19:48.498778   31154 main.go:141] libmachine: (ha-193737) DBG | Writing magic tar header
	I1001 19:19:48.498788   31154 main.go:141] libmachine: (ha-193737) DBG | Writing SSH key tar header
	I1001 19:19:48.498813   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:48.498738   31177 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737 ...
	I1001 19:19:48.498825   31154 main.go:141] libmachine: (ha-193737) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737
	I1001 19:19:48.498844   31154 main.go:141] libmachine: (ha-193737) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737 (perms=drwx------)
	I1001 19:19:48.498866   31154 main.go:141] libmachine: (ha-193737) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines (perms=drwxr-xr-x)
	I1001 19:19:48.498875   31154 main.go:141] libmachine: (ha-193737) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube (perms=drwxr-xr-x)
	I1001 19:19:48.498909   31154 main.go:141] libmachine: (ha-193737) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines
	I1001 19:19:48.498961   31154 main.go:141] libmachine: (ha-193737) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198 (perms=drwxrwxr-x)
	I1001 19:19:48.498975   31154 main.go:141] libmachine: (ha-193737) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:19:48.498992   31154 main.go:141] libmachine: (ha-193737) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198
	I1001 19:19:48.499012   31154 main.go:141] libmachine: (ha-193737) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 19:19:48.499035   31154 main.go:141] libmachine: (ha-193737) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 19:19:48.499048   31154 main.go:141] libmachine: (ha-193737) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 19:19:48.499056   31154 main.go:141] libmachine: (ha-193737) Creating domain...
	I1001 19:19:48.499066   31154 main.go:141] libmachine: (ha-193737) DBG | Checking permissions on dir: /home/jenkins
	I1001 19:19:48.499074   31154 main.go:141] libmachine: (ha-193737) DBG | Checking permissions on dir: /home
	I1001 19:19:48.499095   31154 main.go:141] libmachine: (ha-193737) DBG | Skipping /home - not owner
	I1001 19:19:48.500091   31154 main.go:141] libmachine: (ha-193737) define libvirt domain using xml: 
	I1001 19:19:48.500110   31154 main.go:141] libmachine: (ha-193737) <domain type='kvm'>
	I1001 19:19:48.500119   31154 main.go:141] libmachine: (ha-193737)   <name>ha-193737</name>
	I1001 19:19:48.500128   31154 main.go:141] libmachine: (ha-193737)   <memory unit='MiB'>2200</memory>
	I1001 19:19:48.500140   31154 main.go:141] libmachine: (ha-193737)   <vcpu>2</vcpu>
	I1001 19:19:48.500149   31154 main.go:141] libmachine: (ha-193737)   <features>
	I1001 19:19:48.500155   31154 main.go:141] libmachine: (ha-193737)     <acpi/>
	I1001 19:19:48.500161   31154 main.go:141] libmachine: (ha-193737)     <apic/>
	I1001 19:19:48.500166   31154 main.go:141] libmachine: (ha-193737)     <pae/>
	I1001 19:19:48.500178   31154 main.go:141] libmachine: (ha-193737)     
	I1001 19:19:48.500186   31154 main.go:141] libmachine: (ha-193737)   </features>
	I1001 19:19:48.500190   31154 main.go:141] libmachine: (ha-193737)   <cpu mode='host-passthrough'>
	I1001 19:19:48.500271   31154 main.go:141] libmachine: (ha-193737)   
	I1001 19:19:48.500322   31154 main.go:141] libmachine: (ha-193737)   </cpu>
	I1001 19:19:48.500344   31154 main.go:141] libmachine: (ha-193737)   <os>
	I1001 19:19:48.500376   31154 main.go:141] libmachine: (ha-193737)     <type>hvm</type>
	I1001 19:19:48.500385   31154 main.go:141] libmachine: (ha-193737)     <boot dev='cdrom'/>
	I1001 19:19:48.500394   31154 main.go:141] libmachine: (ha-193737)     <boot dev='hd'/>
	I1001 19:19:48.500402   31154 main.go:141] libmachine: (ha-193737)     <bootmenu enable='no'/>
	I1001 19:19:48.500407   31154 main.go:141] libmachine: (ha-193737)   </os>
	I1001 19:19:48.500422   31154 main.go:141] libmachine: (ha-193737)   <devices>
	I1001 19:19:48.500428   31154 main.go:141] libmachine: (ha-193737)     <disk type='file' device='cdrom'>
	I1001 19:19:48.500438   31154 main.go:141] libmachine: (ha-193737)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/boot2docker.iso'/>
	I1001 19:19:48.500448   31154 main.go:141] libmachine: (ha-193737)       <target dev='hdc' bus='scsi'/>
	I1001 19:19:48.500454   31154 main.go:141] libmachine: (ha-193737)       <readonly/>
	I1001 19:19:48.500461   31154 main.go:141] libmachine: (ha-193737)     </disk>
	I1001 19:19:48.500475   31154 main.go:141] libmachine: (ha-193737)     <disk type='file' device='disk'>
	I1001 19:19:48.500485   31154 main.go:141] libmachine: (ha-193737)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 19:19:48.500507   31154 main.go:141] libmachine: (ha-193737)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/ha-193737.rawdisk'/>
	I1001 19:19:48.500514   31154 main.go:141] libmachine: (ha-193737)       <target dev='hda' bus='virtio'/>
	I1001 19:19:48.500519   31154 main.go:141] libmachine: (ha-193737)     </disk>
	I1001 19:19:48.500525   31154 main.go:141] libmachine: (ha-193737)     <interface type='network'>
	I1001 19:19:48.500530   31154 main.go:141] libmachine: (ha-193737)       <source network='mk-ha-193737'/>
	I1001 19:19:48.500536   31154 main.go:141] libmachine: (ha-193737)       <model type='virtio'/>
	I1001 19:19:48.500541   31154 main.go:141] libmachine: (ha-193737)     </interface>
	I1001 19:19:48.500547   31154 main.go:141] libmachine: (ha-193737)     <interface type='network'>
	I1001 19:19:48.500552   31154 main.go:141] libmachine: (ha-193737)       <source network='default'/>
	I1001 19:19:48.500558   31154 main.go:141] libmachine: (ha-193737)       <model type='virtio'/>
	I1001 19:19:48.500570   31154 main.go:141] libmachine: (ha-193737)     </interface>
	I1001 19:19:48.500593   31154 main.go:141] libmachine: (ha-193737)     <serial type='pty'>
	I1001 19:19:48.500606   31154 main.go:141] libmachine: (ha-193737)       <target port='0'/>
	I1001 19:19:48.500616   31154 main.go:141] libmachine: (ha-193737)     </serial>
	I1001 19:19:48.500621   31154 main.go:141] libmachine: (ha-193737)     <console type='pty'>
	I1001 19:19:48.500632   31154 main.go:141] libmachine: (ha-193737)       <target type='serial' port='0'/>
	I1001 19:19:48.500644   31154 main.go:141] libmachine: (ha-193737)     </console>
	I1001 19:19:48.500651   31154 main.go:141] libmachine: (ha-193737)     <rng model='virtio'>
	I1001 19:19:48.500662   31154 main.go:141] libmachine: (ha-193737)       <backend model='random'>/dev/random</backend>
	I1001 19:19:48.500669   31154 main.go:141] libmachine: (ha-193737)     </rng>
	I1001 19:19:48.500674   31154 main.go:141] libmachine: (ha-193737)     
	I1001 19:19:48.500681   31154 main.go:141] libmachine: (ha-193737)     
	I1001 19:19:48.500687   31154 main.go:141] libmachine: (ha-193737)   </devices>
	I1001 19:19:48.500693   31154 main.go:141] libmachine: (ha-193737) </domain>
	I1001 19:19:48.500703   31154 main.go:141] libmachine: (ha-193737) 
	I1001 19:19:48.505062   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:e8:37:5d in network default
	I1001 19:19:48.505636   31154 main.go:141] libmachine: (ha-193737) Ensuring networks are active...
	I1001 19:19:48.505675   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:48.506541   31154 main.go:141] libmachine: (ha-193737) Ensuring network default is active
	I1001 19:19:48.506813   31154 main.go:141] libmachine: (ha-193737) Ensuring network mk-ha-193737 is active
	I1001 19:19:48.507255   31154 main.go:141] libmachine: (ha-193737) Getting domain xml...
	I1001 19:19:48.507904   31154 main.go:141] libmachine: (ha-193737) Creating domain...
	I1001 19:19:49.716659   31154 main.go:141] libmachine: (ha-193737) Waiting to get IP...
	I1001 19:19:49.717406   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:49.717831   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:49.717883   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:49.717825   31177 retry.go:31] will retry after 192.827447ms: waiting for machine to come up
	I1001 19:19:49.912407   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:49.912907   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:49.912957   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:49.912879   31177 retry.go:31] will retry after 258.269769ms: waiting for machine to come up
	I1001 19:19:50.172507   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:50.173033   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:50.173054   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:50.172948   31177 retry.go:31] will retry after 373.637188ms: waiting for machine to come up
	I1001 19:19:50.548615   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:50.549181   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:50.549210   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:50.549112   31177 retry.go:31] will retry after 430.626472ms: waiting for machine to come up
	I1001 19:19:50.981709   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:50.982164   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:50.982197   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:50.982117   31177 retry.go:31] will retry after 529.86174ms: waiting for machine to come up
	I1001 19:19:51.513872   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:51.514354   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:51.514379   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:51.514310   31177 retry.go:31] will retry after 925.92584ms: waiting for machine to come up
	I1001 19:19:52.441513   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:52.442015   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:52.442079   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:52.441913   31177 retry.go:31] will retry after 1.034076263s: waiting for machine to come up
	I1001 19:19:53.477995   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:53.478427   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:53.478449   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:53.478392   31177 retry.go:31] will retry after 1.13194403s: waiting for machine to come up
	I1001 19:19:54.612551   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:54.613118   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:54.613140   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:54.613054   31177 retry.go:31] will retry after 1.647034063s: waiting for machine to come up
	I1001 19:19:56.262733   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:56.263161   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:56.263186   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:56.263102   31177 retry.go:31] will retry after 1.500997099s: waiting for machine to come up
	I1001 19:19:57.765863   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:57.766323   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:57.766356   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:57.766274   31177 retry.go:31] will retry after 2.455749683s: waiting for machine to come up
	I1001 19:20:00.223334   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:00.223743   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:20:00.223759   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:20:00.223705   31177 retry.go:31] will retry after 2.437856543s: waiting for machine to come up
	I1001 19:20:02.664433   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:02.664809   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:20:02.664832   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:20:02.664763   31177 retry.go:31] will retry after 3.902681899s: waiting for machine to come up
	I1001 19:20:06.571440   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:06.571775   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:20:06.571797   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:20:06.571730   31177 retry.go:31] will retry after 5.423043301s: waiting for machine to come up
	I1001 19:20:11.999360   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:11.999779   31154 main.go:141] libmachine: (ha-193737) Found IP for machine: 192.168.39.14
	I1001 19:20:11.999815   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has current primary IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:11.999824   31154 main.go:141] libmachine: (ha-193737) Reserving static IP address...
	I1001 19:20:12.000199   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find host DHCP lease matching {name: "ha-193737", mac: "52:54:00:80:2b:09", ip: "192.168.39.14"} in network mk-ha-193737
	I1001 19:20:12.077653   31154 main.go:141] libmachine: (ha-193737) Reserved static IP address: 192.168.39.14
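
The repeated "will retry after …" lines above are a poll loop with growing delays while the freshly defined domain picks up a DHCP lease on mk-ha-193737. A minimal, generic sketch of that pattern follows (this is not minikube's retry.go; lookupIP is a hypothetical stand-in for the lease lookup):

// waitforip.go: generic poll-with-backoff sketch.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(lookupIP func() (string, bool), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookupIP(); ok {
			return ip, nil
		}
		// Grow the delay and add jitter, roughly matching the increasing
		// intervals seen in the log above.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, bool) {
		attempts++
		if attempts < 4 {
			return "", false // pretend the lease is not there yet
		}
		return "192.168.39.14", true
	}, 30*time.Second)
	fmt.Println(ip, err)
}
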
	I1001 19:20:12.077732   31154 main.go:141] libmachine: (ha-193737) DBG | Getting to WaitForSSH function...
	I1001 19:20:12.077743   31154 main.go:141] libmachine: (ha-193737) Waiting for SSH to be available...
	I1001 19:20:12.080321   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.080865   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:minikube Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.080898   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.081006   31154 main.go:141] libmachine: (ha-193737) DBG | Using SSH client type: external
	I1001 19:20:12.081047   31154 main.go:141] libmachine: (ha-193737) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa (-rw-------)
	I1001 19:20:12.081075   31154 main.go:141] libmachine: (ha-193737) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.14 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 19:20:12.081085   31154 main.go:141] libmachine: (ha-193737) DBG | About to run SSH command:
	I1001 19:20:12.081096   31154 main.go:141] libmachine: (ha-193737) DBG | exit 0
	I1001 19:20:12.208487   31154 main.go:141] libmachine: (ha-193737) DBG | SSH cmd err, output: <nil>: 
	I1001 19:20:12.208725   31154 main.go:141] libmachine: (ha-193737) KVM machine creation complete!
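
WaitForSSH simply keeps running `exit 0` through the external ssh client with the options logged above until the command exits cleanly. A hedged sketch of that probe, reusing the host, user, and key path from the log (the option subset and the loop bounds are assumptions):

// sshprobe.go: sketch of probing SSH reachability by running `ssh ... exit 0`
// until it returns status 0, as the log above shows.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshReachable(host, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-i", keyPath,
		"docker@" + host,
		"exit", "0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	host := "192.168.39.14"
	key := "/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa" // from the log
	for i := 0; i < 30; i++ {
		if sshReachable(host, key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
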
	I1001 19:20:12.209102   31154 main.go:141] libmachine: (ha-193737) Calling .GetConfigRaw
	I1001 19:20:12.209646   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:12.209809   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:12.209935   31154 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 19:20:12.209949   31154 main.go:141] libmachine: (ha-193737) Calling .GetState
	I1001 19:20:12.211166   31154 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 19:20:12.211190   31154 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 19:20:12.211195   31154 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 19:20:12.211201   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:12.213529   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.213857   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.213883   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.213972   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:12.214116   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.214264   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.214394   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:12.214556   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:12.214781   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:20:12.214795   31154 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 19:20:12.319892   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 19:20:12.319913   31154 main.go:141] libmachine: Detecting the provisioner...
	I1001 19:20:12.319921   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:12.322718   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.323165   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.323192   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.323331   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:12.323522   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.323695   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.323840   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:12.324072   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:12.324284   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:20:12.324296   31154 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 19:20:12.429264   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 19:20:12.429335   31154 main.go:141] libmachine: found compatible host: buildroot
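
Provisioner detection is nothing more than parsing the `cat /etc/os-release` output shown above and matching the ID field. A small sketch of that parsing against the captured Buildroot output:

// osrelease.go: sketch of picking the distribution ID out of /etc/os-release
// content such as the Buildroot output above.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func parseOSRelease(content string) map[string]string {
	kv := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(content))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		parts := strings.SplitN(line, "=", 2)
		kv[parts[0]] = strings.Trim(parts[1], `"`)
	}
	return kv
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(out)
	fmt.Println("ID:", info["ID"], "PRETTY_NAME:", info["PRETTY_NAME"])
}
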
	I1001 19:20:12.429344   31154 main.go:141] libmachine: Provisioning with buildroot...
	I1001 19:20:12.429358   31154 main.go:141] libmachine: (ha-193737) Calling .GetMachineName
	I1001 19:20:12.429572   31154 buildroot.go:166] provisioning hostname "ha-193737"
	I1001 19:20:12.429594   31154 main.go:141] libmachine: (ha-193737) Calling .GetMachineName
	I1001 19:20:12.429736   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:12.432551   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.432897   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.432926   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.433127   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:12.433317   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.433512   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.433661   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:12.433801   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:12.433993   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:20:12.434007   31154 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-193737 && echo "ha-193737" | sudo tee /etc/hostname
	I1001 19:20:12.557230   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-193737
	
	I1001 19:20:12.557264   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:12.560034   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.560377   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.560404   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.560580   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:12.560736   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.560897   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.561023   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:12.561173   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:12.561344   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:20:12.561360   31154 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-193737' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-193737/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-193737' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 19:20:12.673716   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
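
The hostname step runs two shell snippets on the guest: one sets /etc/hostname, the other makes sure /etc/hosts carries a 127.0.1.1 entry for the node. A sketch that composes those commands for an arbitrary node name (a single-line rendering of the multi-line script above; it only prints the commands, nothing is executed):

// sethostname.go: sketch of composing the hostname commands seen above.
package main

import "fmt"

func hostnameCommands(name string) []string {
	return []string{
		fmt.Sprintf(`sudo hostname %s && echo "%s" | sudo tee /etc/hostname`, name, name),
		fmt.Sprintf(`if ! grep -xq '.*\s%s' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts; else echo '127.0.1.1 %s' | sudo tee -a /etc/hosts; fi; fi`, name, name, name),
	}
}

func main() {
	for _, c := range hostnameCommands("ha-193737") {
		fmt.Println(c)
	}
}
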
	I1001 19:20:12.673759   31154 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 19:20:12.673797   31154 buildroot.go:174] setting up certificates
	I1001 19:20:12.673811   31154 provision.go:84] configureAuth start
	I1001 19:20:12.673825   31154 main.go:141] libmachine: (ha-193737) Calling .GetMachineName
	I1001 19:20:12.674136   31154 main.go:141] libmachine: (ha-193737) Calling .GetIP
	I1001 19:20:12.676892   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.677280   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.677321   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.677483   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:12.679978   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.680305   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.680326   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.680487   31154 provision.go:143] copyHostCerts
	I1001 19:20:12.680516   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:20:12.680561   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 19:20:12.680573   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:20:12.680654   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 19:20:12.680751   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:20:12.680775   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 19:20:12.680787   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:20:12.680824   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 19:20:12.680885   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:20:12.680909   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 19:20:12.680917   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:20:12.680951   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 19:20:12.681013   31154 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.ha-193737 san=[127.0.0.1 192.168.39.14 ha-193737 localhost minikube]
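
The server certificate above is issued for the SAN list [127.0.0.1 192.168.39.14 ha-193737 localhost minikube]. A simplified sketch of producing such a certificate with Go's crypto/x509 follows; unlike minikube, which signs with its own CA key, this version self-signs to stay short, so treat it purely as an illustration:

// servercert.go: simplified, self-signed sketch of a server certificate with
// the SANs listed in the log above. Not minikube's provision code.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-193737"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-193737", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.14")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
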
	I1001 19:20:12.842484   31154 provision.go:177] copyRemoteCerts
	I1001 19:20:12.842574   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 19:20:12.842621   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:12.845898   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.846287   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.846310   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.846561   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:12.846731   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.846941   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:12.847077   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:20:12.930698   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 19:20:12.930795   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 19:20:12.955852   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 19:20:12.955930   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1001 19:20:12.979656   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 19:20:12.979722   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 19:20:13.003473   31154 provision.go:87] duration metric: took 329.649424ms to configureAuth
	I1001 19:20:13.003500   31154 buildroot.go:189] setting minikube options for container-runtime
	I1001 19:20:13.003695   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:20:13.003768   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:13.006651   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.006965   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.006994   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.007204   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:13.007396   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:13.007569   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:13.007765   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:13.007963   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:13.008170   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:20:13.008194   31154 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 19:20:13.223895   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 19:20:13.223928   31154 main.go:141] libmachine: Checking connection to Docker...
	I1001 19:20:13.223938   31154 main.go:141] libmachine: (ha-193737) Calling .GetURL
	I1001 19:20:13.225295   31154 main.go:141] libmachine: (ha-193737) DBG | Using libvirt version 6000000
	I1001 19:20:13.227525   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.227866   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.227899   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.227999   31154 main.go:141] libmachine: Docker is up and running!
	I1001 19:20:13.228014   31154 main.go:141] libmachine: Reticulating splines...
	I1001 19:20:13.228022   31154 client.go:171] duration metric: took 25.333507515s to LocalClient.Create
	I1001 19:20:13.228041   31154 start.go:167] duration metric: took 25.333560566s to libmachine.API.Create "ha-193737"
	I1001 19:20:13.228050   31154 start.go:293] postStartSetup for "ha-193737" (driver="kvm2")
	I1001 19:20:13.228060   31154 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 19:20:13.228083   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:13.228317   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 19:20:13.228339   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:13.230391   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.230709   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.230732   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.230837   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:13.230988   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:13.231120   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:13.231290   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:20:13.314353   31154 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 19:20:13.318432   31154 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 19:20:13.318458   31154 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 19:20:13.318541   31154 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 19:20:13.318638   31154 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 19:20:13.318652   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /etc/ssl/certs/184302.pem
	I1001 19:20:13.318780   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 19:20:13.328571   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:20:13.353035   31154 start.go:296] duration metric: took 124.970717ms for postStartSetup
	I1001 19:20:13.353110   31154 main.go:141] libmachine: (ha-193737) Calling .GetConfigRaw
	I1001 19:20:13.353736   31154 main.go:141] libmachine: (ha-193737) Calling .GetIP
	I1001 19:20:13.356423   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.356817   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.356852   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.357086   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:20:13.357278   31154 start.go:128] duration metric: took 25.480687424s to createHost
	I1001 19:20:13.357297   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:13.359783   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.360160   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.360189   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.360384   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:13.360591   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:13.360774   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:13.360932   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:13.361105   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:13.361274   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:20:13.361289   31154 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 19:20:13.464991   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727810413.446268696
	
	I1001 19:20:13.465023   31154 fix.go:216] guest clock: 1727810413.446268696
	I1001 19:20:13.465037   31154 fix.go:229] Guest: 2024-10-01 19:20:13.446268696 +0000 UTC Remote: 2024-10-01 19:20:13.35728811 +0000 UTC m=+25.585126920 (delta=88.980586ms)
	I1001 19:20:13.465070   31154 fix.go:200] guest clock delta is within tolerance: 88.980586ms
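
The clock check runs `date +%s.%N` on the guest and compares the result with the host-side timestamp; here the delta is 88.980586ms. A sketch of that comparison using the two values captured above (the 1-second tolerance is an assumption for illustration; the real threshold lives in minikube's fix.go):

// clockdelta.go: sketch of the guest-clock comparison from the log above.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	guestOut := "1727810413.446268696" // `date +%s.%N` output captured in the log
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	remote := time.Date(2024, 10, 1, 19, 20, 13, 357288110, time.UTC) // host-side timestamp from the log
	delta := guest.Sub(remote)
	tolerance := 1 * time.Second // assumed tolerance
	fmt.Printf("guest clock delta: %v\n", delta)
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Println("delta is within tolerance")
	} else {
		fmt.Println("delta exceeds tolerance; the guest clock would be adjusted")
	}
}
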
	I1001 19:20:13.465076   31154 start.go:83] releasing machines lock for "ha-193737", held for 25.588575039s
	I1001 19:20:13.465101   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:13.465340   31154 main.go:141] libmachine: (ha-193737) Calling .GetIP
	I1001 19:20:13.468083   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.468419   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.468447   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.468613   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:13.469143   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:13.469301   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:13.469362   31154 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 19:20:13.469413   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:13.469528   31154 ssh_runner.go:195] Run: cat /version.json
	I1001 19:20:13.469548   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:13.471980   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.472049   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.472309   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.472339   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.472393   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.472414   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.472482   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:13.472622   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:13.472666   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:13.472784   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:13.472833   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:13.472925   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:13.472991   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:20:13.473062   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:20:13.597462   31154 ssh_runner.go:195] Run: systemctl --version
	I1001 19:20:13.603452   31154 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 19:20:13.764276   31154 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 19:20:13.770676   31154 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 19:20:13.770753   31154 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 19:20:13.785990   31154 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 19:20:13.786018   31154 start.go:495] detecting cgroup driver to use...
	I1001 19:20:13.786088   31154 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 19:20:13.802042   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 19:20:13.815442   31154 docker.go:217] disabling cri-docker service (if available) ...
	I1001 19:20:13.815514   31154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 19:20:13.829012   31154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 19:20:13.842769   31154 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 19:20:13.956694   31154 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 19:20:14.102874   31154 docker.go:233] disabling docker service ...
	I1001 19:20:14.102940   31154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 19:20:14.117261   31154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 19:20:14.129985   31154 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 19:20:14.273597   31154 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 19:20:14.384529   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 19:20:14.397753   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 19:20:14.415792   31154 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 19:20:14.415850   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:14.426007   31154 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 19:20:14.426087   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:14.436393   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:14.446247   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:14.456029   31154 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 19:20:14.466078   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:14.475781   31154 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:14.492551   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:14.502706   31154 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 19:20:14.512290   31154 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 19:20:14.512379   31154 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 19:20:14.525913   31154 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 19:20:14.535543   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:20:14.653960   31154 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 19:20:14.741173   31154 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 19:20:14.741263   31154 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 19:20:14.745800   31154 start.go:563] Will wait 60s for crictl version
	I1001 19:20:14.745869   31154 ssh_runner.go:195] Run: which crictl
	I1001 19:20:14.749449   31154 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 19:20:14.789074   31154 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 19:20:14.789159   31154 ssh_runner.go:195] Run: crio --version
	I1001 19:20:14.820545   31154 ssh_runner.go:195] Run: crio --version
	I1001 19:20:14.849920   31154 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 19:20:14.850894   31154 main.go:141] libmachine: (ha-193737) Calling .GetIP
	I1001 19:20:14.853389   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:14.853698   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:14.853724   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:14.853935   31154 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 19:20:14.857967   31154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
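
The one-liner above keeps /etc/hosts idempotent: any existing host.minikube.internal line is filtered out, the fresh 192.168.39.1 mapping is appended, and the result is copied back over the file. A rough Go equivalent, assuming the same IP and hostname (illustrative only, not minikube's code):

    // hosts_sketch.go: drop any stale host.minikube.internal record, append a new one.
    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const (
            hostsFile = "/etc/hosts"
            entry     = "192.168.39.1\thost.minikube.internal" // values from the log
        )

        data, err := os.ReadFile(hostsFile)
        if err != nil {
            log.Fatal(err)
        }

        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\thost.minikube.internal") {
                continue // remove the stale record, mirroring the grep -v
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)

        if err := os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            log.Fatal(err)
        }
    }
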
	I1001 19:20:14.870673   31154 kubeadm.go:883] updating cluster {Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 19:20:14.870794   31154 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 19:20:14.870846   31154 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 19:20:14.901722   31154 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1001 19:20:14.901791   31154 ssh_runner.go:195] Run: which lz4
	I1001 19:20:14.905716   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1001 19:20:14.905869   31154 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 19:20:14.909954   31154 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 19:20:14.909985   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1001 19:20:16.176019   31154 crio.go:462] duration metric: took 1.270229445s to copy over tarball
	I1001 19:20:16.176091   31154 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 19:20:18.196924   31154 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.020807915s)
	I1001 19:20:18.196955   31154 crio.go:469] duration metric: took 2.020904101s to extract the tarball
	I1001 19:20:18.196963   31154 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1001 19:20:18.232395   31154 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 19:20:18.277292   31154 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 19:20:18.277310   31154 cache_images.go:84] Images are preloaded, skipping loading
	I1001 19:20:18.277317   31154 kubeadm.go:934] updating node { 192.168.39.14 8443 v1.31.1 crio true true} ...
	I1001 19:20:18.277404   31154 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-193737 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 19:20:18.277469   31154 ssh_runner.go:195] Run: crio config
	I1001 19:20:18.320909   31154 cni.go:84] Creating CNI manager for ""
	I1001 19:20:18.320940   31154 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1001 19:20:18.320955   31154 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 19:20:18.320983   31154 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.14 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-193737 NodeName:ha-193737 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.14"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.14 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 19:20:18.321130   31154 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.14
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-193737"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.14
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.14"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 19:20:18.321154   31154 kube-vip.go:115] generating kube-vip config ...
	I1001 19:20:18.321192   31154 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1001 19:20:18.337979   31154 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1001 19:20:18.338099   31154 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
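
The modprobe at 19:20:18.321192 loads the IPVS modules that kube-vip's control-plane load-balancing (the lb_enable/lb_port settings above, fronting the VIP 192.168.39.254) relies on. A small verification sketch, assuming the modules show up in /proc/modules (built-in modules will not); the module list is taken from the modprobe call, everything else is hypothetical:

    // ipvs_check_sketch.go: report which of the expected modules are loaded.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        want := []string{"ip_vs", "ip_vs_rr", "ip_vs_wrr", "ip_vs_sh", "nf_conntrack"}

        data, err := os.ReadFile("/proc/modules")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }

        loaded := map[string]bool{}
        for _, line := range strings.Split(string(data), "\n") {
            if fields := strings.Fields(line); len(fields) > 0 {
                loaded[fields[0]] = true
            }
        }

        for _, m := range want {
            fmt.Printf("%-12s loaded=%v\n", m, loaded[m])
        }
    }
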
	I1001 19:20:18.338161   31154 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 19:20:18.347788   31154 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 19:20:18.347864   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1001 19:20:18.356907   31154 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1001 19:20:18.372922   31154 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 19:20:18.388904   31154 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I1001 19:20:18.404938   31154 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1001 19:20:18.421257   31154 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1001 19:20:18.425122   31154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 19:20:18.436829   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:20:18.545073   31154 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 19:20:18.560862   31154 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737 for IP: 192.168.39.14
	I1001 19:20:18.560887   31154 certs.go:194] generating shared ca certs ...
	I1001 19:20:18.560910   31154 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:18.561104   31154 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 19:20:18.561167   31154 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 19:20:18.561182   31154 certs.go:256] generating profile certs ...
	I1001 19:20:18.561249   31154 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key
	I1001 19:20:18.561277   31154 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.crt with IP's: []
	I1001 19:20:19.147252   31154 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.crt ...
	I1001 19:20:19.147288   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.crt: {Name:mk6cc12194e2b1b488446b45fb57531c12b19cd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:19.147481   31154 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key ...
	I1001 19:20:19.147500   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key: {Name:mk1f7ee6c9ea6b8fcc952a031324588416a57469 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:19.147599   31154 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.c3487d2e
	I1001 19:20:19.147622   31154 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.c3487d2e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.14 192.168.39.254]
	I1001 19:20:19.274032   31154 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.c3487d2e ...
	I1001 19:20:19.274061   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.c3487d2e: {Name:mk19f3cf4cd1f2fca54e40738408be6aa73421ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:19.274224   31154 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.c3487d2e ...
	I1001 19:20:19.274242   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.c3487d2e: {Name:mk2ba24a36a70c8a6e47019bdcda573a26500b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:19.274335   31154 certs.go:381] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.c3487d2e -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt
	I1001 19:20:19.274441   31154 certs.go:385] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.c3487d2e -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key
	I1001 19:20:19.274522   31154 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key
	I1001 19:20:19.274541   31154 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt with IP's: []
	I1001 19:20:19.432987   31154 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt ...
	I1001 19:20:19.433018   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt: {Name:mkaa29f743f43e700e39d0141b3a793971db9bab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:19.433198   31154 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key ...
	I1001 19:20:19.433215   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key: {Name:mkda8f4e7f39ac52933dd1a3f0855317051465de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:19.433333   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 19:20:19.433358   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 19:20:19.433374   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 19:20:19.433394   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 19:20:19.433411   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 19:20:19.433428   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 19:20:19.433441   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 19:20:19.433457   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 19:20:19.433541   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem (1338 bytes)
	W1001 19:20:19.433593   31154 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430_empty.pem, impossibly tiny 0 bytes
	I1001 19:20:19.433606   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 19:20:19.433643   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 19:20:19.433673   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 19:20:19.433703   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 19:20:19.433758   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:20:19.433792   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem -> /usr/share/ca-certificates/18430.pem
	I1001 19:20:19.433812   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /usr/share/ca-certificates/184302.pem
	I1001 19:20:19.433830   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:20:19.434414   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 19:20:19.462971   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 19:20:19.486817   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 19:20:19.510214   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 19:20:19.536715   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1001 19:20:19.562219   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 19:20:19.587563   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 19:20:19.611975   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 19:20:19.635789   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem --> /usr/share/ca-certificates/18430.pem (1338 bytes)
	I1001 19:20:19.660541   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /usr/share/ca-certificates/184302.pem (1708 bytes)
	I1001 19:20:19.686922   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 19:20:19.713247   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 19:20:19.737109   31154 ssh_runner.go:195] Run: openssl version
	I1001 19:20:19.743466   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18430.pem && ln -fs /usr/share/ca-certificates/18430.pem /etc/ssl/certs/18430.pem"
	I1001 19:20:19.755116   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18430.pem
	I1001 19:20:19.760240   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 19:20:19.760326   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18430.pem
	I1001 19:20:19.767474   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18430.pem /etc/ssl/certs/51391683.0"
	I1001 19:20:19.779182   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/184302.pem && ln -fs /usr/share/ca-certificates/184302.pem /etc/ssl/certs/184302.pem"
	I1001 19:20:19.790431   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/184302.pem
	I1001 19:20:19.795533   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 19:20:19.795593   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/184302.pem
	I1001 19:20:19.801533   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/184302.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 19:20:19.812537   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 19:20:19.823577   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:20:19.828798   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:20:19.828870   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:20:19.835152   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
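
Each CA follows the same pattern: link it into /usr/share/ca-certificates, ask openssl for its subject hash, and create /etc/ssl/certs/<hash>.0 pointing at it so TLS clients can resolve it (the b5213941.0 link above is minikubeCA's hash). A hedged Go sketch of the hash-and-symlink step, using the minikubeCA.pem path from the log as the example:

    // ca_link_sketch.go: compute a CA's subject hash and create the <hash>.0 symlink.
    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log

        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out))

        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace an existing link, mirroring ln -fs
        if err := os.Symlink(cert, link); err != nil {
            log.Fatal(err)
        }
        log.Printf("%s -> %s", link, cert)
    }
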
	I1001 19:20:19.846376   31154 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 19:20:19.850628   31154 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 19:20:19.850680   31154 kubeadm.go:392] StartCluster: {Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clus
terName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:20:19.850761   31154 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 19:20:19.850812   31154 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 19:20:19.892830   31154 cri.go:89] found id: ""
	I1001 19:20:19.892895   31154 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 19:20:19.902960   31154 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 19:20:19.913367   31154 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 19:20:19.923292   31154 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 19:20:19.923330   31154 kubeadm.go:157] found existing configuration files:
	
	I1001 19:20:19.923388   31154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 19:20:19.932878   31154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 19:20:19.932945   31154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 19:20:19.943333   31154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 19:20:19.952676   31154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 19:20:19.952738   31154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 19:20:19.962992   31154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 19:20:19.972649   31154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 19:20:19.972735   31154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 19:20:19.982834   31154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 19:20:19.993409   31154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 19:20:19.993469   31154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 19:20:20.002988   31154 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 19:20:20.127435   31154 kubeadm.go:310] W1001 19:20:20.114172     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 19:20:20.128326   31154 kubeadm.go:310] W1001 19:20:20.115365     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 19:20:20.262781   31154 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 19:20:31.543814   31154 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 19:20:31.543907   31154 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 19:20:31.543995   31154 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 19:20:31.544073   31154 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 19:20:31.544148   31154 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 19:20:31.544203   31154 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 19:20:31.545532   31154 out.go:235]   - Generating certificates and keys ...
	I1001 19:20:31.545611   31154 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 19:20:31.545691   31154 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 19:20:31.545778   31154 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1001 19:20:31.545854   31154 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1001 19:20:31.545932   31154 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1001 19:20:31.546012   31154 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1001 19:20:31.546085   31154 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1001 19:20:31.546175   31154 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-193737 localhost] and IPs [192.168.39.14 127.0.0.1 ::1]
	I1001 19:20:31.546218   31154 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1001 19:20:31.546369   31154 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-193737 localhost] and IPs [192.168.39.14 127.0.0.1 ::1]
	I1001 19:20:31.546436   31154 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1001 19:20:31.546488   31154 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1001 19:20:31.546527   31154 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1001 19:20:31.546577   31154 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 19:20:31.546623   31154 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 19:20:31.546668   31154 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 19:20:31.546722   31154 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 19:20:31.546817   31154 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 19:20:31.546863   31154 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 19:20:31.546932   31154 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 19:20:31.547004   31154 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 19:20:31.549095   31154 out.go:235]   - Booting up control plane ...
	I1001 19:20:31.549193   31154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 19:20:31.549275   31154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 19:20:31.549365   31154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 19:20:31.549456   31154 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 19:20:31.549553   31154 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 19:20:31.549596   31154 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 19:20:31.549707   31154 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 19:20:31.549790   31154 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 19:20:31.549840   31154 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.357694ms
	I1001 19:20:31.549900   31154 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1001 19:20:31.549947   31154 kubeadm.go:310] [api-check] The API server is healthy after 6.04683454s
	I1001 19:20:31.550033   31154 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 19:20:31.550189   31154 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 19:20:31.550277   31154 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 19:20:31.550430   31154 kubeadm.go:310] [mark-control-plane] Marking the node ha-193737 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 19:20:31.550487   31154 kubeadm.go:310] [bootstrap-token] Using token: 7by4e8.7cs25dkxb8txjdft
	I1001 19:20:31.551753   31154 out.go:235]   - Configuring RBAC rules ...
	I1001 19:20:31.551859   31154 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 19:20:31.551994   31154 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 19:20:31.552131   31154 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 19:20:31.552254   31154 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 19:20:31.552369   31154 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 19:20:31.552467   31154 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 19:20:31.552576   31154 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 19:20:31.552620   31154 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 19:20:31.552661   31154 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 19:20:31.552670   31154 kubeadm.go:310] 
	I1001 19:20:31.552724   31154 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 19:20:31.552736   31154 kubeadm.go:310] 
	I1001 19:20:31.552812   31154 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 19:20:31.552820   31154 kubeadm.go:310] 
	I1001 19:20:31.552841   31154 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 19:20:31.552936   31154 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 19:20:31.553000   31154 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 19:20:31.553018   31154 kubeadm.go:310] 
	I1001 19:20:31.553076   31154 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 19:20:31.553082   31154 kubeadm.go:310] 
	I1001 19:20:31.553119   31154 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 19:20:31.553125   31154 kubeadm.go:310] 
	I1001 19:20:31.553165   31154 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 19:20:31.553231   31154 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 19:20:31.553309   31154 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 19:20:31.553319   31154 kubeadm.go:310] 
	I1001 19:20:31.553382   31154 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 19:20:31.553446   31154 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 19:20:31.553452   31154 kubeadm.go:310] 
	I1001 19:20:31.553515   31154 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7by4e8.7cs25dkxb8txjdft \
	I1001 19:20:31.553595   31154 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 \
	I1001 19:20:31.553612   31154 kubeadm.go:310] 	--control-plane 
	I1001 19:20:31.553616   31154 kubeadm.go:310] 
	I1001 19:20:31.553679   31154 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 19:20:31.553686   31154 kubeadm.go:310] 
	I1001 19:20:31.553757   31154 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7by4e8.7cs25dkxb8txjdft \
	I1001 19:20:31.553878   31154 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 
	I1001 19:20:31.553899   31154 cni.go:84] Creating CNI manager for ""
	I1001 19:20:31.553906   31154 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1001 19:20:31.555354   31154 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1001 19:20:31.556734   31154 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1001 19:20:31.562528   31154 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1001 19:20:31.562546   31154 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1001 19:20:31.584306   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1001 19:20:31.963746   31154 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 19:20:31.963826   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:31.963839   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-193737 minikube.k8s.io/updated_at=2024_10_01T19_20_31_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=ha-193737 minikube.k8s.io/primary=true
	I1001 19:20:32.001753   31154 ops.go:34] apiserver oom_adj: -16
	I1001 19:20:32.132202   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:32.632805   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:33.133195   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:33.633216   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:34.132915   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:34.632316   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:35.132491   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:35.632537   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:36.132620   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:36.218756   31154 kubeadm.go:1113] duration metric: took 4.255002576s to wait for elevateKubeSystemPrivileges
	I1001 19:20:36.218788   31154 kubeadm.go:394] duration metric: took 16.368111595s to StartCluster
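
The elevateKubeSystemPrivileges wait above is a plain poll loop: "kubectl get sa default" is retried roughly every half second until the default service account exists, at which point the cluster-admin binding for kube-system:default can be applied. An illustrative version of such a loop, reusing the kubectl binary and kubeconfig paths from the log (the two-minute timeout is an assumption):

    // sa_wait_sketch.go: poll for the default service account before binding RBAC.
    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        const (
            kubectl    = "/var/lib/minikube/binaries/v1.31.1/kubectl"
            kubeconfig = "/var/lib/minikube/kubeconfig"
        )

        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig="+kubeconfig)
            if err := cmd.Run(); err == nil {
                log.Println("default service account is present")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("timed out waiting for the default service account")
    }
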
	I1001 19:20:36.218804   31154 settings.go:142] acquiring lock: {Name:mkeb3fe64ed992373f048aef24eb0d675bcab60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:36.218873   31154 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 19:20:36.219494   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/kubeconfig: {Name:mk7ef19d546928001abf2478a3abe1d17765a591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:36.219713   31154 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:20:36.219727   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1001 19:20:36.219734   31154 start.go:241] waiting for startup goroutines ...
	I1001 19:20:36.219741   31154 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 19:20:36.219834   31154 addons.go:69] Setting storage-provisioner=true in profile "ha-193737"
	I1001 19:20:36.219856   31154 addons.go:234] Setting addon storage-provisioner=true in "ha-193737"
	I1001 19:20:36.219869   31154 addons.go:69] Setting default-storageclass=true in profile "ha-193737"
	I1001 19:20:36.219886   31154 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-193737"
	I1001 19:20:36.219893   31154 host.go:66] Checking if "ha-193737" exists ...
	I1001 19:20:36.219970   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:20:36.220394   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:20:36.220428   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:20:36.220398   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:20:36.220520   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:20:36.237915   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41109
	I1001 19:20:36.238065   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40357
	I1001 19:20:36.238375   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:20:36.238551   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:20:36.238872   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:20:36.238891   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:20:36.239076   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:20:36.239108   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:20:36.239214   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:20:36.239454   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:20:36.239611   31154 main.go:141] libmachine: (ha-193737) Calling .GetState
	I1001 19:20:36.239781   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:20:36.239809   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:20:36.241737   31154 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 19:20:36.241972   31154 kapi.go:59] client config for ha-193737: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.crt", KeyFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key", CAFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 19:20:36.242414   31154 cert_rotation.go:140] Starting client certificate rotation controller
	I1001 19:20:36.242541   31154 addons.go:234] Setting addon default-storageclass=true in "ha-193737"
	I1001 19:20:36.242580   31154 host.go:66] Checking if "ha-193737" exists ...
	I1001 19:20:36.242883   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:20:36.242931   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:20:36.258780   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39715
	I1001 19:20:36.259292   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:20:36.259824   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:20:36.259850   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:20:36.260262   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:20:36.260587   31154 main.go:141] libmachine: (ha-193737) Calling .GetState
	I1001 19:20:36.262369   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37495
	I1001 19:20:36.262435   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:36.263083   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:20:36.263600   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:20:36.263628   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:20:36.264019   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:20:36.264582   31154 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 19:20:36.264749   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:20:36.264788   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:20:36.265963   31154 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 19:20:36.265987   31154 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 19:20:36.266008   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:36.270544   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:36.271199   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:36.271222   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:36.271425   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:36.271642   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:36.271818   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:36.272058   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:20:36.283812   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42859
	I1001 19:20:36.284387   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:20:36.284896   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:20:36.284913   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:20:36.285508   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:20:36.285834   31154 main.go:141] libmachine: (ha-193737) Calling .GetState
	I1001 19:20:36.288106   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:36.288393   31154 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 19:20:36.288414   31154 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 19:20:36.288437   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:36.291938   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:36.292436   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:36.292463   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:36.292681   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:36.292858   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:36.293020   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:36.293164   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:20:36.379914   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1001 19:20:36.401549   31154 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 19:20:36.450371   31154 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 19:20:36.756603   31154 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
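The sed pipeline a few lines above rewrites the CoreDNS ConfigMap in place before replacing it. As a minimal sketch only (not part of the test run, and assuming the kubeconfig context is named ha-193737, which minikube normally creates for this profile), the injected record could be confirmed like this:

	kubectl --context ha-193737 -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
	# Expected stanza, reconstructed from the sed expression above:
	#        hosts {
	#           192.168.39.1 host.minikube.internal
	#           fallthrough
	#        }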
	I1001 19:20:37.190467   31154 main.go:141] libmachine: Making call to close driver server
	I1001 19:20:37.190501   31154 main.go:141] libmachine: (ha-193737) Calling .Close
	I1001 19:20:37.190537   31154 main.go:141] libmachine: Making call to close driver server
	I1001 19:20:37.190556   31154 main.go:141] libmachine: (ha-193737) Calling .Close
	I1001 19:20:37.190812   31154 main.go:141] libmachine: Successfully made call to close driver server
	I1001 19:20:37.190821   31154 main.go:141] libmachine: Successfully made call to close driver server
	I1001 19:20:37.190830   31154 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 19:20:37.190833   31154 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 19:20:37.190839   31154 main.go:141] libmachine: Making call to close driver server
	I1001 19:20:37.190841   31154 main.go:141] libmachine: Making call to close driver server
	I1001 19:20:37.190847   31154 main.go:141] libmachine: (ha-193737) Calling .Close
	I1001 19:20:37.190848   31154 main.go:141] libmachine: (ha-193737) Calling .Close
	I1001 19:20:37.191111   31154 main.go:141] libmachine: Successfully made call to close driver server
	I1001 19:20:37.191115   31154 main.go:141] libmachine: Successfully made call to close driver server
	I1001 19:20:37.191125   31154 main.go:141] libmachine: (ha-193737) DBG | Closing plugin on server side
	I1001 19:20:37.191134   31154 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 19:20:37.191134   31154 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 19:20:37.191205   31154 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1001 19:20:37.191222   31154 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1001 19:20:37.191338   31154 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1001 19:20:37.191344   31154 round_trippers.go:469] Request Headers:
	I1001 19:20:37.191354   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:20:37.191358   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:20:37.219411   31154 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I1001 19:20:37.219983   31154 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1001 19:20:37.219997   31154 round_trippers.go:469] Request Headers:
	I1001 19:20:37.220005   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:20:37.220008   31154 round_trippers.go:473]     Content-Type: application/json
	I1001 19:20:37.220011   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:20:37.228402   31154 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1001 19:20:37.228596   31154 main.go:141] libmachine: Making call to close driver server
	I1001 19:20:37.228610   31154 main.go:141] libmachine: (ha-193737) Calling .Close
	I1001 19:20:37.228929   31154 main.go:141] libmachine: Successfully made call to close driver server
	I1001 19:20:37.228950   31154 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 19:20:37.228974   31154 main.go:141] libmachine: (ha-193737) DBG | Closing plugin on server side
	I1001 19:20:37.230600   31154 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1001 19:20:37.231770   31154 addons.go:510] duration metric: took 1.012023889s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1001 19:20:37.231812   31154 start.go:246] waiting for cluster config update ...
	I1001 19:20:37.231823   31154 start.go:255] writing updated cluster config ...
	I1001 19:20:37.233187   31154 out.go:201] 
	I1001 19:20:37.234563   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:20:37.234629   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:20:37.236253   31154 out.go:177] * Starting "ha-193737-m02" control-plane node in "ha-193737" cluster
	I1001 19:20:37.237974   31154 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 19:20:37.238007   31154 cache.go:56] Caching tarball of preloaded images
	I1001 19:20:37.238089   31154 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 19:20:37.238106   31154 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 19:20:37.238204   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:20:37.238426   31154 start.go:360] acquireMachinesLock for ha-193737-m02: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 19:20:37.238490   31154 start.go:364] duration metric: took 37.598µs to acquireMachinesLock for "ha-193737-m02"
	I1001 19:20:37.238511   31154 start.go:93] Provisioning new machine with config: &{Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:20:37.238603   31154 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1001 19:20:37.240050   31154 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 19:20:37.240148   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:20:37.240181   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:20:37.256492   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43983
	I1001 19:20:37.257003   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:20:37.257628   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:20:37.257663   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:20:37.258069   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:20:37.258273   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetMachineName
	I1001 19:20:37.258413   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:37.258584   31154 start.go:159] libmachine.API.Create for "ha-193737" (driver="kvm2")
	I1001 19:20:37.258609   31154 client.go:168] LocalClient.Create starting
	I1001 19:20:37.258644   31154 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem
	I1001 19:20:37.258691   31154 main.go:141] libmachine: Decoding PEM data...
	I1001 19:20:37.258706   31154 main.go:141] libmachine: Parsing certificate...
	I1001 19:20:37.258752   31154 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem
	I1001 19:20:37.258775   31154 main.go:141] libmachine: Decoding PEM data...
	I1001 19:20:37.258791   31154 main.go:141] libmachine: Parsing certificate...
	I1001 19:20:37.258820   31154 main.go:141] libmachine: Running pre-create checks...
	I1001 19:20:37.258831   31154 main.go:141] libmachine: (ha-193737-m02) Calling .PreCreateCheck
	I1001 19:20:37.258981   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetConfigRaw
	I1001 19:20:37.259499   31154 main.go:141] libmachine: Creating machine...
	I1001 19:20:37.259521   31154 main.go:141] libmachine: (ha-193737-m02) Calling .Create
	I1001 19:20:37.259645   31154 main.go:141] libmachine: (ha-193737-m02) Creating KVM machine...
	I1001 19:20:37.261171   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found existing default KVM network
	I1001 19:20:37.261376   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found existing private KVM network mk-ha-193737
	I1001 19:20:37.261582   31154 main.go:141] libmachine: (ha-193737-m02) Setting up store path in /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02 ...
	I1001 19:20:37.261615   31154 main.go:141] libmachine: (ha-193737-m02) Building disk image from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 19:20:37.261632   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:37.261518   31541 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:20:37.261750   31154 main.go:141] libmachine: (ha-193737-m02) Downloading /home/jenkins/minikube-integration/19736-11198/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 19:20:37.511803   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:37.511639   31541 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa...
	I1001 19:20:37.705703   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:37.705550   31541 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/ha-193737-m02.rawdisk...
	I1001 19:20:37.705738   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Writing magic tar header
	I1001 19:20:37.705753   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Writing SSH key tar header
	I1001 19:20:37.705765   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:37.705670   31541 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02 ...
	I1001 19:20:37.705777   31154 main.go:141] libmachine: (ha-193737-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02 (perms=drwx------)
	I1001 19:20:37.705791   31154 main.go:141] libmachine: (ha-193737-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines (perms=drwxr-xr-x)
	I1001 19:20:37.705802   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02
	I1001 19:20:37.705808   31154 main.go:141] libmachine: (ha-193737-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube (perms=drwxr-xr-x)
	I1001 19:20:37.705819   31154 main.go:141] libmachine: (ha-193737-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198 (perms=drwxrwxr-x)
	I1001 19:20:37.705827   31154 main.go:141] libmachine: (ha-193737-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 19:20:37.705840   31154 main.go:141] libmachine: (ha-193737-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 19:20:37.705857   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines
	I1001 19:20:37.705865   31154 main.go:141] libmachine: (ha-193737-m02) Creating domain...
	I1001 19:20:37.705882   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:20:37.705895   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198
	I1001 19:20:37.705908   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 19:20:37.705917   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Checking permissions on dir: /home/jenkins
	I1001 19:20:37.705926   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Checking permissions on dir: /home
	I1001 19:20:37.705934   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Skipping /home - not owner
	I1001 19:20:37.706847   31154 main.go:141] libmachine: (ha-193737-m02) define libvirt domain using xml: 
	I1001 19:20:37.706866   31154 main.go:141] libmachine: (ha-193737-m02) <domain type='kvm'>
	I1001 19:20:37.706875   31154 main.go:141] libmachine: (ha-193737-m02)   <name>ha-193737-m02</name>
	I1001 19:20:37.706882   31154 main.go:141] libmachine: (ha-193737-m02)   <memory unit='MiB'>2200</memory>
	I1001 19:20:37.706889   31154 main.go:141] libmachine: (ha-193737-m02)   <vcpu>2</vcpu>
	I1001 19:20:37.706899   31154 main.go:141] libmachine: (ha-193737-m02)   <features>
	I1001 19:20:37.706907   31154 main.go:141] libmachine: (ha-193737-m02)     <acpi/>
	I1001 19:20:37.706913   31154 main.go:141] libmachine: (ha-193737-m02)     <apic/>
	I1001 19:20:37.706921   31154 main.go:141] libmachine: (ha-193737-m02)     <pae/>
	I1001 19:20:37.706927   31154 main.go:141] libmachine: (ha-193737-m02)     
	I1001 19:20:37.706935   31154 main.go:141] libmachine: (ha-193737-m02)   </features>
	I1001 19:20:37.706943   31154 main.go:141] libmachine: (ha-193737-m02)   <cpu mode='host-passthrough'>
	I1001 19:20:37.706947   31154 main.go:141] libmachine: (ha-193737-m02)   
	I1001 19:20:37.706951   31154 main.go:141] libmachine: (ha-193737-m02)   </cpu>
	I1001 19:20:37.706958   31154 main.go:141] libmachine: (ha-193737-m02)   <os>
	I1001 19:20:37.706963   31154 main.go:141] libmachine: (ha-193737-m02)     <type>hvm</type>
	I1001 19:20:37.706969   31154 main.go:141] libmachine: (ha-193737-m02)     <boot dev='cdrom'/>
	I1001 19:20:37.706979   31154 main.go:141] libmachine: (ha-193737-m02)     <boot dev='hd'/>
	I1001 19:20:37.706999   31154 main.go:141] libmachine: (ha-193737-m02)     <bootmenu enable='no'/>
	I1001 19:20:37.707014   31154 main.go:141] libmachine: (ha-193737-m02)   </os>
	I1001 19:20:37.707026   31154 main.go:141] libmachine: (ha-193737-m02)   <devices>
	I1001 19:20:37.707037   31154 main.go:141] libmachine: (ha-193737-m02)     <disk type='file' device='cdrom'>
	I1001 19:20:37.707052   31154 main.go:141] libmachine: (ha-193737-m02)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/boot2docker.iso'/>
	I1001 19:20:37.707067   31154 main.go:141] libmachine: (ha-193737-m02)       <target dev='hdc' bus='scsi'/>
	I1001 19:20:37.707078   31154 main.go:141] libmachine: (ha-193737-m02)       <readonly/>
	I1001 19:20:37.707090   31154 main.go:141] libmachine: (ha-193737-m02)     </disk>
	I1001 19:20:37.707105   31154 main.go:141] libmachine: (ha-193737-m02)     <disk type='file' device='disk'>
	I1001 19:20:37.707118   31154 main.go:141] libmachine: (ha-193737-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 19:20:37.707132   31154 main.go:141] libmachine: (ha-193737-m02)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/ha-193737-m02.rawdisk'/>
	I1001 19:20:37.707142   31154 main.go:141] libmachine: (ha-193737-m02)       <target dev='hda' bus='virtio'/>
	I1001 19:20:37.707150   31154 main.go:141] libmachine: (ha-193737-m02)     </disk>
	I1001 19:20:37.707164   31154 main.go:141] libmachine: (ha-193737-m02)     <interface type='network'>
	I1001 19:20:37.707176   31154 main.go:141] libmachine: (ha-193737-m02)       <source network='mk-ha-193737'/>
	I1001 19:20:37.707186   31154 main.go:141] libmachine: (ha-193737-m02)       <model type='virtio'/>
	I1001 19:20:37.707196   31154 main.go:141] libmachine: (ha-193737-m02)     </interface>
	I1001 19:20:37.707206   31154 main.go:141] libmachine: (ha-193737-m02)     <interface type='network'>
	I1001 19:20:37.707217   31154 main.go:141] libmachine: (ha-193737-m02)       <source network='default'/>
	I1001 19:20:37.707227   31154 main.go:141] libmachine: (ha-193737-m02)       <model type='virtio'/>
	I1001 19:20:37.707241   31154 main.go:141] libmachine: (ha-193737-m02)     </interface>
	I1001 19:20:37.707259   31154 main.go:141] libmachine: (ha-193737-m02)     <serial type='pty'>
	I1001 19:20:37.707267   31154 main.go:141] libmachine: (ha-193737-m02)       <target port='0'/>
	I1001 19:20:37.707272   31154 main.go:141] libmachine: (ha-193737-m02)     </serial>
	I1001 19:20:37.707279   31154 main.go:141] libmachine: (ha-193737-m02)     <console type='pty'>
	I1001 19:20:37.707283   31154 main.go:141] libmachine: (ha-193737-m02)       <target type='serial' port='0'/>
	I1001 19:20:37.707290   31154 main.go:141] libmachine: (ha-193737-m02)     </console>
	I1001 19:20:37.707295   31154 main.go:141] libmachine: (ha-193737-m02)     <rng model='virtio'>
	I1001 19:20:37.707303   31154 main.go:141] libmachine: (ha-193737-m02)       <backend model='random'>/dev/random</backend>
	I1001 19:20:37.707306   31154 main.go:141] libmachine: (ha-193737-m02)     </rng>
	I1001 19:20:37.707313   31154 main.go:141] libmachine: (ha-193737-m02)     
	I1001 19:20:37.707317   31154 main.go:141] libmachine: (ha-193737-m02)     
	I1001 19:20:37.707323   31154 main.go:141] libmachine: (ha-193737-m02)   </devices>
	I1001 19:20:37.707331   31154 main.go:141] libmachine: (ha-193737-m02) </domain>
	I1001 19:20:37.707362   31154 main.go:141] libmachine: (ha-193737-m02) 
	I1001 19:20:37.714050   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:2e:69:af in network default
	I1001 19:20:37.714587   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:37.714605   31154 main.go:141] libmachine: (ha-193737-m02) Ensuring networks are active...
	I1001 19:20:37.715386   31154 main.go:141] libmachine: (ha-193737-m02) Ensuring network default is active
	I1001 19:20:37.715688   31154 main.go:141] libmachine: (ha-193737-m02) Ensuring network mk-ha-193737 is active
	I1001 19:20:37.716026   31154 main.go:141] libmachine: (ha-193737-m02) Getting domain xml...
	I1001 19:20:37.716683   31154 main.go:141] libmachine: (ha-193737-m02) Creating domain...
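The domain XML printed above is what the kvm2 driver feeds to libvirt before polling for an IP. As an illustrative sketch only (not something the test itself runs), the defined domain and the DHCP leases the driver is about to wait on can be inspected on the build host with virsh, using the qemu:///system URI from the machine config:

	virsh --connect qemu:///system dumpxml ha-193737-m02
	virsh --connect qemu:///system net-dhcp-leases mk-ha-193737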
	I1001 19:20:38.946823   31154 main.go:141] libmachine: (ha-193737-m02) Waiting to get IP...
	I1001 19:20:38.947612   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:38.948069   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:38.948111   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:38.948057   31541 retry.go:31] will retry after 211.487702ms: waiting for machine to come up
	I1001 19:20:39.161472   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:39.161945   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:39.161981   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:39.161920   31541 retry.go:31] will retry after 369.29813ms: waiting for machine to come up
	I1001 19:20:39.532486   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:39.533006   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:39.533034   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:39.532951   31541 retry.go:31] will retry after 340.79833ms: waiting for machine to come up
	I1001 19:20:39.875453   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:39.875902   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:39.875928   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:39.875855   31541 retry.go:31] will retry after 558.36179ms: waiting for machine to come up
	I1001 19:20:40.435617   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:40.436128   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:40.436156   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:40.436070   31541 retry.go:31] will retry after 724.412456ms: waiting for machine to come up
	I1001 19:20:41.161753   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:41.162215   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:41.162238   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:41.162183   31541 retry.go:31] will retry after 921.122771ms: waiting for machine to come up
	I1001 19:20:42.085509   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:42.085978   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:42.086002   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:42.085932   31541 retry.go:31] will retry after 886.914683ms: waiting for machine to come up
	I1001 19:20:42.974460   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:42.974900   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:42.974926   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:42.974856   31541 retry.go:31] will retry after 1.455695023s: waiting for machine to come up
	I1001 19:20:44.432773   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:44.433336   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:44.433365   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:44.433292   31541 retry.go:31] will retry after 1.415796379s: waiting for machine to come up
	I1001 19:20:45.850938   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:45.851337   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:45.851357   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:45.851309   31541 retry.go:31] will retry after 1.972979972s: waiting for machine to come up
	I1001 19:20:47.825356   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:47.825785   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:47.825812   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:47.825732   31541 retry.go:31] will retry after 1.92262401s: waiting for machine to come up
	I1001 19:20:49.750763   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:49.751160   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:49.751177   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:49.751137   31541 retry.go:31] will retry after 3.587777506s: waiting for machine to come up
	I1001 19:20:53.340173   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:53.340566   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:53.340617   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:53.340558   31541 retry.go:31] will retry after 3.748563727s: waiting for machine to come up
	I1001 19:20:57.093502   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.094007   31154 main.go:141] libmachine: (ha-193737-m02) Found IP for machine: 192.168.39.27
	I1001 19:20:57.094023   31154 main.go:141] libmachine: (ha-193737-m02) Reserving static IP address...
	I1001 19:20:57.094037   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has current primary IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.094391   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find host DHCP lease matching {name: "ha-193737-m02", mac: "52:54:00:7b:e4:d4", ip: "192.168.39.27"} in network mk-ha-193737
	I1001 19:20:57.171234   31154 main.go:141] libmachine: (ha-193737-m02) Reserved static IP address: 192.168.39.27
	I1001 19:20:57.171257   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Getting to WaitForSSH function...
	I1001 19:20:57.171265   31154 main.go:141] libmachine: (ha-193737-m02) Waiting for SSH to be available...
	I1001 19:20:57.173965   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.174561   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:57.174594   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.174717   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Using SSH client type: external
	I1001 19:20:57.174748   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa (-rw-------)
	I1001 19:20:57.174779   31154 main.go:141] libmachine: (ha-193737-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.27 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 19:20:57.174794   31154 main.go:141] libmachine: (ha-193737-m02) DBG | About to run SSH command:
	I1001 19:20:57.174810   31154 main.go:141] libmachine: (ha-193737-m02) DBG | exit 0
	I1001 19:20:57.304572   31154 main.go:141] libmachine: (ha-193737-m02) DBG | SSH cmd err, output: <nil>: 
	I1001 19:20:57.304868   31154 main.go:141] libmachine: (ha-193737-m02) KVM machine creation complete!
	I1001 19:20:57.305162   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetConfigRaw
	I1001 19:20:57.305752   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:57.305953   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:57.306163   31154 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 19:20:57.306232   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetState
	I1001 19:20:57.307715   31154 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 19:20:57.307729   31154 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 19:20:57.307736   31154 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 19:20:57.307743   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:57.310409   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.310801   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:57.310826   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.310956   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:57.311136   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.311267   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.311408   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:57.311603   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:57.311799   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1001 19:20:57.311811   31154 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 19:20:57.423687   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 19:20:57.423716   31154 main.go:141] libmachine: Detecting the provisioner...
	I1001 19:20:57.423741   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:57.426918   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.427323   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:57.427358   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.427583   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:57.427788   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.428027   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.428201   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:57.428392   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:57.428632   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1001 19:20:57.428762   31154 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 19:20:57.541173   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 19:20:57.541232   31154 main.go:141] libmachine: found compatible host: buildroot
	I1001 19:20:57.541238   31154 main.go:141] libmachine: Provisioning with buildroot...
	I1001 19:20:57.541245   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetMachineName
	I1001 19:20:57.541504   31154 buildroot.go:166] provisioning hostname "ha-193737-m02"
	I1001 19:20:57.541527   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetMachineName
	I1001 19:20:57.541689   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:57.544406   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.544791   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:57.544830   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.544962   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:57.545135   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.545283   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.545382   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:57.545543   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:57.545753   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1001 19:20:57.545769   31154 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-193737-m02 && echo "ha-193737-m02" | sudo tee /etc/hostname
	I1001 19:20:57.675116   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-193737-m02
	
	I1001 19:20:57.675147   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:57.678239   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.678600   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:57.678624   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.678822   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:57.679011   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.679146   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.679254   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:57.679397   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:57.679573   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1001 19:20:57.679599   31154 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-193737-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-193737-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-193737-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 19:20:57.800899   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 19:20:57.800928   31154 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 19:20:57.800946   31154 buildroot.go:174] setting up certificates
	I1001 19:20:57.800957   31154 provision.go:84] configureAuth start
	I1001 19:20:57.800969   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetMachineName
	I1001 19:20:57.801194   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetIP
	I1001 19:20:57.803613   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.803954   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:57.803982   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.804134   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:57.806340   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.806657   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:57.806678   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.806860   31154 provision.go:143] copyHostCerts
	I1001 19:20:57.806892   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:20:57.806929   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 19:20:57.806937   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:20:57.807013   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 19:20:57.807084   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:20:57.807101   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 19:20:57.807107   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:20:57.807131   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 19:20:57.807178   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:20:57.807196   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 19:20:57.807202   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:20:57.807221   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 19:20:57.807269   31154 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.ha-193737-m02 san=[127.0.0.1 192.168.39.27 ha-193737-m02 localhost minikube]
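The server certificate generated above carries the SAN list shown in the log entry. A hedged sketch of one way to double-check it on the build host with openssl (path taken from the log line; illustrative only, and the file is regenerated per machine):

	openssl x509 -in /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem -noout -text | grep -A 1 'Subject Alternative Name'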
	I1001 19:20:58.056549   31154 provision.go:177] copyRemoteCerts
	I1001 19:20:58.056608   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 19:20:58.056631   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:58.059291   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.059620   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.059653   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.059823   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:58.060033   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.060174   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:58.060291   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa Username:docker}
	I1001 19:20:58.146502   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 19:20:58.146577   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 19:20:58.170146   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 19:20:58.170211   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 19:20:58.193090   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 19:20:58.193172   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 19:20:58.215033   31154 provision.go:87] duration metric: took 414.061487ms to configureAuth
	I1001 19:20:58.215067   31154 buildroot.go:189] setting minikube options for container-runtime
	I1001 19:20:58.215250   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:20:58.215327   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:58.218149   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.218497   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.218527   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.218653   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:58.218868   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.219033   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.219156   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:58.219300   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:58.219460   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1001 19:20:58.219473   31154 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 19:20:58.470145   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 19:20:58.470178   31154 main.go:141] libmachine: Checking connection to Docker...
	I1001 19:20:58.470189   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetURL
	I1001 19:20:58.471402   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Using libvirt version 6000000
	I1001 19:20:58.474024   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.474371   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.474412   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.474613   31154 main.go:141] libmachine: Docker is up and running!
	I1001 19:20:58.474631   31154 main.go:141] libmachine: Reticulating splines...
	I1001 19:20:58.474639   31154 client.go:171] duration metric: took 21.216022282s to LocalClient.Create
	I1001 19:20:58.474664   31154 start.go:167] duration metric: took 21.216081227s to libmachine.API.Create "ha-193737"
	I1001 19:20:58.474674   31154 start.go:293] postStartSetup for "ha-193737-m02" (driver="kvm2")
	I1001 19:20:58.474687   31154 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 19:20:58.474711   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:58.475026   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 19:20:58.475056   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:58.477612   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.478051   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.478084   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.478170   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:58.478359   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.478475   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:58.478613   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa Username:docker}
	I1001 19:20:58.566449   31154 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 19:20:58.570622   31154 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 19:20:58.570648   31154 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 19:20:58.570715   31154 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 19:20:58.570786   31154 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 19:20:58.570798   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /etc/ssl/certs/184302.pem
	I1001 19:20:58.570944   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 19:20:58.579535   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:20:58.601457   31154 start.go:296] duration metric: took 126.771104ms for postStartSetup
	I1001 19:20:58.601513   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetConfigRaw
	I1001 19:20:58.602068   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetIP
	I1001 19:20:58.604495   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.604874   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.604900   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.605223   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:20:58.605434   31154 start.go:128] duration metric: took 21.366818669s to createHost
	I1001 19:20:58.605467   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:58.607650   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.608026   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.608051   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.608184   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:58.608337   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.608453   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.608557   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:58.608693   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:58.608837   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1001 19:20:58.608847   31154 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 19:20:58.721980   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727810458.681508368
	
	I1001 19:20:58.722008   31154 fix.go:216] guest clock: 1727810458.681508368
	I1001 19:20:58.722018   31154 fix.go:229] Guest: 2024-10-01 19:20:58.681508368 +0000 UTC Remote: 2024-10-01 19:20:58.605448095 +0000 UTC m=+70.833286913 (delta=76.060273ms)
	I1001 19:20:58.722040   31154 fix.go:200] guest clock delta is within tolerance: 76.060273ms
	I1001 19:20:58.722049   31154 start.go:83] releasing machines lock for "ha-193737-m02", held for 21.483548504s
	I1001 19:20:58.722074   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:58.722316   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetIP
	I1001 19:20:58.725092   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.725406   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.725439   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.727497   31154 out.go:177] * Found network options:
	I1001 19:20:58.728546   31154 out.go:177]   - NO_PROXY=192.168.39.14
	W1001 19:20:58.729434   31154 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 19:20:58.729479   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:58.729929   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:58.730082   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:58.730149   31154 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 19:20:58.730189   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	W1001 19:20:58.730253   31154 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 19:20:58.730326   31154 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 19:20:58.730347   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:58.732847   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.732897   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.733209   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.733238   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.733263   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.733277   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.733405   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:58.733481   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:58.733618   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.733656   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.733727   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:58.733802   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:58.733822   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa Username:docker}
	I1001 19:20:58.733934   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa Username:docker}
	I1001 19:20:58.972871   31154 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 19:20:58.978194   31154 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 19:20:58.978260   31154 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 19:20:58.994663   31154 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 19:20:58.994684   31154 start.go:495] detecting cgroup driver to use...
	I1001 19:20:58.994738   31154 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 19:20:59.011009   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 19:20:59.025521   31154 docker.go:217] disabling cri-docker service (if available) ...
	I1001 19:20:59.025608   31154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 19:20:59.039348   31154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 19:20:59.052807   31154 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 19:20:59.169289   31154 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 19:20:59.334757   31154 docker.go:233] disabling docker service ...
	I1001 19:20:59.334834   31154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 19:20:59.348035   31154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 19:20:59.360660   31154 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 19:20:59.486509   31154 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 19:20:59.604588   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 19:20:59.617998   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 19:20:59.635554   31154 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 19:20:59.635626   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:59.645574   31154 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 19:20:59.645648   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:59.655487   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:59.665223   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:59.674970   31154 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 19:20:59.684872   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:59.694696   31154 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:59.710618   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
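
The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, reset conmon_cgroup to "pod", and seed a default_sysctls block. A rough Go equivalent of those edits, applied to an in-memory copy of the file, is sketched below (illustrative only, not minikube's code):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Stand-in for the contents of /etc/crio/crio.conf.d/02-crio.conf.
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"`

	// Pin the pause image (mirrors the first sed above).
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Switch the cgroup manager to cgroupfs.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any stale conmon_cgroup line, then re-add it right after
	// cgroup_manager with the value "pod" (same effect as the sed pair).
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "${0}\nconmon_cgroup = \"pod\"")

	fmt.Println(conf)
	// The remaining sed lines follow the same pattern to seed a
	// default_sysctls block with net.ipv4.ip_unprivileged_port_start=0.
}
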
	I1001 19:20:59.721089   31154 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 19:20:59.731283   31154 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 19:20:59.731352   31154 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 19:20:59.746274   31154 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
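
The sysctl failure above is expected on a fresh guest: /proc/sys/net/bridge only exists once the br_netfilter module is loaded, which is why the log immediately falls back to modprobe and then enables IPv4 forwarding. A hedged Go sketch of the same fallback (paths taken from the log, everything else assumed):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const bridgeSysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(bridgeSysctl); os.IsNotExist(err) {
		// Same recovery path as the log: the module simply is not loaded yet.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter failed: %v (%s)\n", err, out)
			return
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`; needs root.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Println("enabling ip_forward requires root:", err)
	}
}
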
	I1001 19:20:59.756184   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:20:59.870307   31154 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 19:20:59.956939   31154 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 19:20:59.957022   31154 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 19:20:59.961766   31154 start.go:563] Will wait 60s for crictl version
	I1001 19:20:59.961831   31154 ssh_runner.go:195] Run: which crictl
	I1001 19:20:59.965776   31154 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 19:21:00.010361   31154 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 19:21:00.010446   31154 ssh_runner.go:195] Run: crio --version
	I1001 19:21:00.041083   31154 ssh_runner.go:195] Run: crio --version
	I1001 19:21:00.075668   31154 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 19:21:00.077105   31154 out.go:177]   - env NO_PROXY=192.168.39.14
	I1001 19:21:00.078374   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetIP
	I1001 19:21:00.081375   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:21:00.081679   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:21:00.081711   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:21:00.081983   31154 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 19:21:00.086306   31154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 19:21:00.099180   31154 mustload.go:65] Loading cluster: ha-193737
	I1001 19:21:00.099450   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:21:00.099790   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:21:00.099833   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:21:00.115527   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43263
	I1001 19:21:00.116081   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:21:00.116546   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:21:00.116565   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:21:00.116887   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:21:00.117121   31154 main.go:141] libmachine: (ha-193737) Calling .GetState
	I1001 19:21:00.118679   31154 host.go:66] Checking if "ha-193737" exists ...
	I1001 19:21:00.118968   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:21:00.119005   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:21:00.133660   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37793
	I1001 19:21:00.134171   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:21:00.134638   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:21:00.134657   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:21:00.134945   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:21:00.135112   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:21:00.135251   31154 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737 for IP: 192.168.39.27
	I1001 19:21:00.135263   31154 certs.go:194] generating shared ca certs ...
	I1001 19:21:00.135281   31154 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:21:00.135407   31154 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 19:21:00.135448   31154 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 19:21:00.135454   31154 certs.go:256] generating profile certs ...
	I1001 19:21:00.135523   31154 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key
	I1001 19:21:00.135547   31154 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.b6f75b80
	I1001 19:21:00.135561   31154 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.b6f75b80 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.14 192.168.39.27 192.168.39.254]
	I1001 19:21:00.686434   31154 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.b6f75b80 ...
	I1001 19:21:00.686467   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.b6f75b80: {Name:mkeb01bd9448160d7d89858bc8ed1c53818e2061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:21:00.686650   31154 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.b6f75b80 ...
	I1001 19:21:00.686663   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.b6f75b80: {Name:mk3a8c2ce4c29185d261167caf7207467c082c1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:21:00.686733   31154 certs.go:381] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.b6f75b80 -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt
	I1001 19:21:00.686905   31154 certs.go:385] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.b6f75b80 -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key
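
certs.go issues the profile's apiserver certificate with IP SANs covering the in-cluster service address (10.96.0.1), localhost, both control-plane node IPs and the kube-vip VIP (192.168.39.254), so one certificate stays valid whichever endpoint a client dials. The sketch below shows how such a SAN cert can be issued from a CA with Go's crypto/x509; it is a stand-in, not minikube's implementation, and error handling is omitted for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// A freshly generated CA stands in for the cached minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// The SAN list mirrors the log: service IP, localhost, both
	// control-plane node IPs, and the kube-vip VIP.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.14"), net.ParseIP("192.168.39.27"), net.ParseIP("192.168.39.254"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}
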
	I1001 19:21:00.687041   31154 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key
	I1001 19:21:00.687055   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 19:21:00.687068   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 19:21:00.687080   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 19:21:00.687093   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 19:21:00.687105   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 19:21:00.687117   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 19:21:00.687128   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 19:21:00.687140   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 19:21:00.687188   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem (1338 bytes)
	W1001 19:21:00.687218   31154 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430_empty.pem, impossibly tiny 0 bytes
	I1001 19:21:00.687227   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 19:21:00.687249   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 19:21:00.687269   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 19:21:00.687290   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 19:21:00.687321   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:21:00.687345   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:21:00.687358   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem -> /usr/share/ca-certificates/18430.pem
	I1001 19:21:00.687370   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /usr/share/ca-certificates/184302.pem
	I1001 19:21:00.687398   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:21:00.690221   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:21:00.690721   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:21:00.690750   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:21:00.690891   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:21:00.691103   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:21:00.691297   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:21:00.691469   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:21:00.764849   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1001 19:21:00.770067   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1001 19:21:00.781099   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1001 19:21:00.785191   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1001 19:21:00.796213   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1001 19:21:00.800405   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1001 19:21:00.810899   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1001 19:21:00.815556   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1001 19:21:00.825792   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1001 19:21:00.830049   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1001 19:21:00.841022   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1001 19:21:00.845622   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1001 19:21:00.857011   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 19:21:00.881387   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 19:21:00.905420   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 19:21:00.930584   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 19:21:00.957479   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1001 19:21:00.982115   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 19:21:01.005996   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 19:21:01.031948   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 19:21:01.059129   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 19:21:01.084143   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem --> /usr/share/ca-certificates/18430.pem (1338 bytes)
	I1001 19:21:01.109909   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /usr/share/ca-certificates/184302.pem (1708 bytes)
	I1001 19:21:01.133720   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1001 19:21:01.150500   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1001 19:21:01.168599   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1001 19:21:01.185368   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1001 19:21:01.202279   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1001 19:21:01.218930   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1001 19:21:01.235286   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1001 19:21:01.251963   31154 ssh_runner.go:195] Run: openssl version
	I1001 19:21:01.257542   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 19:21:01.268254   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:21:01.272732   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:21:01.272802   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:21:01.278777   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 19:21:01.290880   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18430.pem && ln -fs /usr/share/ca-certificates/18430.pem /etc/ssl/certs/18430.pem"
	I1001 19:21:01.301840   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18430.pem
	I1001 19:21:01.306397   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 19:21:01.306469   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18430.pem
	I1001 19:21:01.312313   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18430.pem /etc/ssl/certs/51391683.0"
	I1001 19:21:01.322717   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/184302.pem && ln -fs /usr/share/ca-certificates/184302.pem /etc/ssl/certs/184302.pem"
	I1001 19:21:01.333015   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/184302.pem
	I1001 19:21:01.337340   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 19:21:01.337400   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/184302.pem
	I1001 19:21:01.343033   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/184302.pem /etc/ssl/certs/3ec20f2e.0"
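
Each CA bundle above is symlinked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0), which is how TLS libraries locate trusted roots. A small illustrative helper that derives the link name the same way the log does (hashLink is a hypothetical name):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hashLink runs `openssl x509 -hash -noout` on a certificate and returns
// the /etc/ssl/certs/<subject-hash>.0 path it would be linked under.
func hashLink(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out))), nil
}

func main() {
	link, err := hashLink("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("openssl not available or cert missing:", err)
		return
	}
	fmt.Println("would create symlink:", link) // e.g. /etc/ssl/certs/b5213941.0
}
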
	I1001 19:21:01.354495   31154 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 19:21:01.358223   31154 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 19:21:01.358275   31154 kubeadm.go:934] updating node {m02 192.168.39.27 8443 v1.31.1 crio true true} ...
	I1001 19:21:01.358349   31154 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-193737-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.27
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
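
kubeadm.go renders that kubelet drop-in from the node's Kubernetes version, hostname override and IP. A hypothetical text/template rendering of the same unit is shown below; the template text mirrors the log, but the helper itself is not minikube's.

package main

import (
	"os"
	"text/template"
)

// unit mirrors the drop-in printed in the log, with the node-specific
// fields turned into template parameters.
const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.31.1",
		"NodeName":          "ha-193737-m02",
		"NodeIP":            "192.168.39.27",
	})
}
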
	I1001 19:21:01.358373   31154 kube-vip.go:115] generating kube-vip config ...
	I1001 19:21:01.358405   31154 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1001 19:21:01.374873   31154 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1001 19:21:01.374943   31154 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1001 19:21:01.374989   31154 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 19:21:01.384444   31154 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1001 19:21:01.384518   31154 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1001 19:21:01.394161   31154 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1001 19:21:01.394190   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 19:21:01.394191   31154 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1001 19:21:01.394256   31154 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 19:21:01.394189   31154 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1001 19:21:01.398439   31154 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1001 19:21:01.398487   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1001 19:21:02.673266   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 19:21:02.673366   31154 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 19:21:02.678383   31154 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1001 19:21:02.678421   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1001 19:21:02.683681   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 19:21:02.723149   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 19:21:02.723251   31154 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 19:21:02.737865   31154 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1001 19:21:02.737908   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
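
binary.go above streams kubectl, kubeadm and kubelet from dl.k8s.io, checking each download against its published .sha256 before copying it to /var/lib/minikube/binaries/v1.31.1 on the node. A self-contained sketch of that download-then-verify step (URL from the log; destination path and error handling are assumptions):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into dest.
func fetch(url, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.Copy(f, resp.Body)
	return err
}

// sha256File returns the hex SHA-256 digest of a file on disk.
func sha256File(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet"
	if err := fetch(base, "/tmp/kubelet"); err != nil {
		fmt.Println("download failed:", err)
		return
	}
	resp, err := http.Get(base + ".sha256") // published digest, digest-only file
	if err != nil {
		fmt.Println("checksum fetch failed:", err)
		return
	}
	defer resp.Body.Close()
	want, _ := io.ReadAll(resp.Body)
	got, _ := sha256File("/tmp/kubelet")
	if got != strings.TrimSpace(string(want)) {
		fmt.Println("checksum mismatch, refusing to install")
		return
	}
	fmt.Println("kubelet checksum verified:", got)
}
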
	I1001 19:21:03.230970   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1001 19:21:03.240943   31154 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1001 19:21:03.257655   31154 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 19:21:03.274741   31154 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1001 19:21:03.291537   31154 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1001 19:21:03.295338   31154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
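
That bash one-liner, like the earlier one for host.minikube.internal, rewrites /etc/hosts by dropping any stale entry for the name and appending a fresh line before copying the result back with sudo. The same idempotent upsert, sketched in Go for clarity (upsertHost is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost removes any existing line ending in "\t<name>" and appends
// a fresh "<ip>\t<name>" entry, matching the grep -v / echo pipeline.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry, drop it
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(upsertHost(string(data), "192.168.39.254", "control-plane.minikube.internal"))
}
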
	I1001 19:21:03.307165   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:21:03.463069   31154 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 19:21:03.480147   31154 host.go:66] Checking if "ha-193737" exists ...
	I1001 19:21:03.480689   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:21:03.480744   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:21:03.495841   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39579
	I1001 19:21:03.496320   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:21:03.496880   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:21:03.496904   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:21:03.497248   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:21:03.497421   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:21:03.497546   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:21:03.497546   31154 start.go:317] joinCluster: &{Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:21:03.497680   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1001 19:21:03.497702   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:21:03.500751   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:21:03.501276   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:21:03.501306   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:21:03.501495   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:21:03.501701   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:21:03.501893   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:21:03.502064   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:21:03.648333   31154 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:21:03.648405   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n692vg.wpdyj1cg443tmqgp --discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-193737-m02 --control-plane --apiserver-advertise-address=192.168.39.27 --apiserver-bind-port=8443"
	I1001 19:21:25.467048   31154 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n692vg.wpdyj1cg443tmqgp --discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-193737-m02 --control-plane --apiserver-advertise-address=192.168.39.27 --apiserver-bind-port=8443": (21.818614216s)
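
The join above is driven by two commands: kubeadm token create --print-join-command --ttl=0 on the existing control plane, and the printed command replayed on m02 with the extra control-plane flags. A hedged sketch of that sequence (flag values copied from the log; in practice the two halves run on different hosts over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// On the existing control plane: mint a join command with a
	// non-expiring token (mirrors the `kubeadm token create` line above).
	out, err := exec.Command("/bin/bash", "-c",
		`sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0`).Output()
	if err != nil {
		fmt.Println("token create failed:", err)
		return
	}

	// On the joining node: replay that command with the extra flags the
	// log shows for a control-plane member (node name, advertise address,
	// CRI socket). Flag values are taken verbatim from the log.
	join := strings.TrimSpace(string(out)) +
		" --ignore-preflight-errors=all" +
		" --cri-socket unix:///var/run/crio/crio.sock" +
		" --node-name=ha-193737-m02 --control-plane" +
		" --apiserver-advertise-address=192.168.39.27 --apiserver-bind-port=8443"
	fmt.Println("would run on m02:", join)
}
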
	I1001 19:21:25.467085   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1001 19:21:26.061914   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-193737-m02 minikube.k8s.io/updated_at=2024_10_01T19_21_26_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=ha-193737 minikube.k8s.io/primary=false
	I1001 19:21:26.203974   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-193737-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1001 19:21:26.315094   31154 start.go:319] duration metric: took 22.817544624s to joinCluster
	I1001 19:21:26.315164   31154 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:21:26.315617   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:21:26.316452   31154 out.go:177] * Verifying Kubernetes components...
	I1001 19:21:26.317646   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:21:26.611377   31154 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 19:21:26.640565   31154 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 19:21:26.640891   31154 kapi.go:59] client config for ha-193737: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.crt", KeyFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key", CAFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1001 19:21:26.640968   31154 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.14:8443
	I1001 19:21:26.641227   31154 node_ready.go:35] waiting up to 6m0s for node "ha-193737-m02" to be "Ready" ...
	I1001 19:21:26.641356   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:26.641366   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:26.641375   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:26.641380   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:26.653154   31154 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1001 19:21:27.141735   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:27.141756   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:27.141764   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:27.141768   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:27.148495   31154 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1001 19:21:27.641626   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:27.641661   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:27.641672   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:27.641677   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:27.646178   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:28.142172   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:28.142200   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:28.142210   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:28.142216   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:28.146315   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:28.641888   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:28.641917   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:28.641931   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:28.641940   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:28.645578   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:28.646211   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:29.141557   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:29.141582   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:29.141592   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:29.141597   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:29.146956   31154 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1001 19:21:29.641796   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:29.641817   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:29.641824   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:29.641829   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:29.645155   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:30.142079   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:30.142103   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:30.142114   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:30.142119   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:30.145277   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:30.642189   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:30.642209   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:30.642217   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:30.642220   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:30.646863   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:30.647494   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:31.141763   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:31.141784   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:31.141796   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:31.141801   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:31.145813   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:31.641815   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:31.641836   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:31.641847   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:31.641853   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:31.645200   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:32.141448   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:32.141473   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:32.141486   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:32.141493   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:32.145295   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:32.641622   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:32.641643   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:32.641649   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:32.641653   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:32.645174   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:33.141797   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:33.141818   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:33.141826   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:33.141830   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:33.145091   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:33.145688   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:33.641422   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:33.641445   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:33.641454   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:33.641464   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:33.644675   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:34.141560   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:34.141589   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:34.141601   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:34.141607   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:34.145278   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:34.641659   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:34.641678   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:34.641686   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:34.641691   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:34.644811   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:35.142049   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:35.142075   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:35.142083   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:35.142087   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:35.145002   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:35.641531   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:35.641559   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:35.641573   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:35.641586   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:35.644829   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:35.645348   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:36.141635   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:36.141655   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:36.141663   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:36.141668   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:36.144536   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:36.642098   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:36.642119   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:36.642127   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:36.642130   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:36.645313   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:37.142420   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:37.142468   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:37.142477   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:37.142481   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:37.145780   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:37.641627   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:37.641647   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:37.641655   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:37.641659   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:37.644484   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:38.142220   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:38.142244   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:38.142255   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:38.142262   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:38.145466   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:38.146172   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:38.641992   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:38.642015   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:38.642024   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:38.642028   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:38.644515   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:39.141559   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:39.141585   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:39.141595   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:39.141601   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:39.145034   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:39.641804   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:39.641838   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:39.641845   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:39.641850   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:39.646296   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:40.142227   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:40.142248   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:40.142256   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:40.142260   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:40.145591   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:40.642234   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:40.642258   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:40.642267   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:40.642271   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:40.645384   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:40.646037   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:41.142410   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:41.142429   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:41.142437   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:41.142441   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:41.145729   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:41.642146   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:41.642167   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:41.642174   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:41.642178   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:41.645647   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:42.141537   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:42.141559   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:42.141569   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:42.141575   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:42.144817   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:42.642106   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:42.642127   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:42.642136   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:42.642141   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:42.645934   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:42.646419   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:43.141441   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:43.141464   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:43.141472   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:43.141476   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:43.144793   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:43.642316   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:43.642337   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:43.642345   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:43.642351   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:43.646007   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:44.142085   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:44.142106   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:44.142114   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:44.142117   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:44.145431   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:44.642346   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:44.642368   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:44.642376   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:44.642379   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:44.645860   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:45.142289   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:45.142312   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.142323   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.142330   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.145780   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:45.146379   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:45.641699   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:45.641725   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.641733   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.641736   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.645813   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:45.646591   31154 node_ready.go:49] node "ha-193737-m02" has status "Ready":"True"
	I1001 19:21:45.646618   31154 node_ready.go:38] duration metric: took 19.005351721s for node "ha-193737-m02" to be "Ready" ...
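Note: the node_ready loop above simply re-fetches the Node object about every 500ms and checks its Ready condition until it reports True. A minimal sketch of that pattern with client-go follows; the kubeconfig path is an illustrative assumption, not something taken from the test harness.

```go
// Sketch only: poll a node's Ready condition the way the node_ready loop above does.
// The kubeconfig path is an assumed placeholder.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-193737-m02", metav1.GetOptions{})
		if err == nil {
			ready := false
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("node %q has status \"Ready\":%v\n", node.Name, ready)
			if ready {
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence seen in the log above
	}
}
```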
	I1001 19:21:45.646627   31154 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 19:21:45.646691   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:21:45.646700   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.646707   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.646713   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.650655   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:45.657881   31154 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hd5hv" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.657971   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hd5hv
	I1001 19:21:45.657980   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.657988   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.657993   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.660900   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:45.661620   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:45.661639   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.661649   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.661657   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.665733   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:45.666386   31154 pod_ready.go:93] pod "coredns-7c65d6cfc9-hd5hv" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:45.666409   31154 pod_ready.go:82] duration metric: took 8.499445ms for pod "coredns-7c65d6cfc9-hd5hv" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.666421   31154 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v2wsx" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.666492   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-v2wsx
	I1001 19:21:45.666502   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.666512   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.666518   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.669133   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:45.669889   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:45.669907   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.669918   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.669923   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.672275   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:45.672755   31154 pod_ready.go:93] pod "coredns-7c65d6cfc9-v2wsx" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:45.672774   31154 pod_ready.go:82] duration metric: took 6.344856ms for pod "coredns-7c65d6cfc9-v2wsx" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.672786   31154 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.672846   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-ha-193737
	I1001 19:21:45.672857   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.672867   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.672872   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.675287   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:45.675893   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:45.675911   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.675922   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.675930   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.678241   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:45.678741   31154 pod_ready.go:93] pod "etcd-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:45.678763   31154 pod_ready.go:82] duration metric: took 5.967949ms for pod "etcd-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.678772   31154 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.678833   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-ha-193737-m02
	I1001 19:21:45.678850   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.678858   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.678871   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.681191   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:45.681800   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:45.681815   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.681825   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.681830   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.683889   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:45.684431   31154 pod_ready.go:93] pod "etcd-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:45.684453   31154 pod_ready.go:82] duration metric: took 5.673081ms for pod "etcd-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.684473   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.841835   31154 request.go:632] Waited for 157.291258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737
	I1001 19:21:45.841900   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737
	I1001 19:21:45.841906   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.841913   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.841919   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.845357   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
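Note: the request.go:632 "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter, which is governed by the QPS and Burst fields on rest.Config. A hedged sketch of setting those knobs is below; the values and kubeconfig path are illustrative, not what minikube actually configures.

```go
// Sketch: client-go throttles requests client-side based on rest.Config.QPS and Burst.
// The values and kubeconfig path below are illustrative assumptions.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func newFasterClient() (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // library default is 5; raising it reduces the "Waited for ..." messages
	cfg.Burst = 100 // library default is 10; allows short bursts above QPS
	return kubernetes.NewForConfig(cfg)
}
```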
	I1001 19:21:46.042508   31154 request.go:632] Waited for 196.405333ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:46.042588   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:46.042599   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:46.042611   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:46.042619   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:46.046254   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:46.046866   31154 pod_ready.go:93] pod "kube-apiserver-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:46.046884   31154 pod_ready.go:82] duration metric: took 362.399581ms for pod "kube-apiserver-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:46.046893   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:46.242039   31154 request.go:632] Waited for 195.063872ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737-m02
	I1001 19:21:46.242144   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737-m02
	I1001 19:21:46.242157   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:46.242168   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:46.242174   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:46.246032   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:46.441916   31154 request.go:632] Waited for 195.330252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:46.441997   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:46.442003   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:46.442011   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:46.442014   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:46.445457   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:46.445994   31154 pod_ready.go:93] pod "kube-apiserver-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:46.446014   31154 pod_ready.go:82] duration metric: took 399.112887ms for pod "kube-apiserver-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:46.446031   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:46.642080   31154 request.go:632] Waited for 195.96912ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737
	I1001 19:21:46.642133   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737
	I1001 19:21:46.642138   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:46.642146   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:46.642149   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:46.645872   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:46.842116   31154 request.go:632] Waited for 195.42226ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:46.842206   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:46.842215   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:46.842223   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:46.842231   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:46.845287   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:46.845743   31154 pod_ready.go:93] pod "kube-controller-manager-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:46.845760   31154 pod_ready.go:82] duration metric: took 399.720077ms for pod "kube-controller-manager-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:46.845770   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:47.042048   31154 request.go:632] Waited for 196.194982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737-m02
	I1001 19:21:47.042116   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737-m02
	I1001 19:21:47.042122   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:47.042129   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:47.042134   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:47.045174   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:47.242154   31154 request.go:632] Waited for 196.389668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:47.242211   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:47.242216   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:47.242224   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:47.242228   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:47.246078   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:47.246437   31154 pod_ready.go:93] pod "kube-controller-manager-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:47.246460   31154 pod_ready.go:82] duration metric: took 400.684034ms for pod "kube-controller-manager-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:47.246470   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4294m" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:47.442023   31154 request.go:632] Waited for 195.496186ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4294m
	I1001 19:21:47.442102   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4294m
	I1001 19:21:47.442107   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:47.442115   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:47.442119   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:47.446724   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:47.642099   31154 request.go:632] Waited for 194.348221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:47.642163   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:47.642174   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:47.642181   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:47.642186   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:47.645393   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:47.645928   31154 pod_ready.go:93] pod "kube-proxy-4294m" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:47.645950   31154 pod_ready.go:82] duration metric: took 399.472712ms for pod "kube-proxy-4294m" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:47.645961   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zpsll" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:47.842563   31154 request.go:632] Waited for 196.53672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zpsll
	I1001 19:21:47.842654   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zpsll
	I1001 19:21:47.842670   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:47.842677   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:47.842685   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:47.846435   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:48.042435   31154 request.go:632] Waited for 195.268783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:48.042516   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:48.042523   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:48.042531   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:48.042535   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:48.045444   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:48.045979   31154 pod_ready.go:93] pod "kube-proxy-zpsll" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:48.045999   31154 pod_ready.go:82] duration metric: took 400.030874ms for pod "kube-proxy-zpsll" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:48.046008   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:48.242127   31154 request.go:632] Waited for 196.061352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737
	I1001 19:21:48.242188   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737
	I1001 19:21:48.242194   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:48.242200   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:48.242205   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:48.245701   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:48.442714   31154 request.go:632] Waited for 196.392016ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:48.442788   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:48.442796   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:48.442806   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:48.442811   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:48.445488   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:48.445923   31154 pod_ready.go:93] pod "kube-scheduler-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:48.445941   31154 pod_ready.go:82] duration metric: took 399.927294ms for pod "kube-scheduler-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:48.445950   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:48.642436   31154 request.go:632] Waited for 196.414559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737-m02
	I1001 19:21:48.642504   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737-m02
	I1001 19:21:48.642511   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:48.642520   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:48.642528   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:48.645886   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:48.841792   31154 request.go:632] Waited for 195.303821ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:48.841877   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:48.841893   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:48.841907   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:48.841917   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:48.845141   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:48.845610   31154 pod_ready.go:93] pod "kube-scheduler-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:48.845627   31154 pod_ready.go:82] duration metric: took 399.670346ms for pod "kube-scheduler-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:48.845638   31154 pod_ready.go:39] duration metric: took 3.199000029s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 19:21:48.845650   31154 api_server.go:52] waiting for apiserver process to appear ...
	I1001 19:21:48.845706   31154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 19:21:48.860102   31154 api_server.go:72] duration metric: took 22.544907394s to wait for apiserver process to appear ...
	I1001 19:21:48.860136   31154 api_server.go:88] waiting for apiserver healthz status ...
	I1001 19:21:48.860157   31154 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I1001 19:21:48.864372   31154 api_server.go:279] https://192.168.39.14:8443/healthz returned 200:
	ok
	I1001 19:21:48.864454   31154 round_trippers.go:463] GET https://192.168.39.14:8443/version
	I1001 19:21:48.864464   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:48.864471   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:48.864475   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:48.865481   31154 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1001 19:21:48.865563   31154 api_server.go:141] control plane version: v1.31.1
	I1001 19:21:48.865578   31154 api_server.go:131] duration metric: took 5.43668ms to wait for apiserver health ...
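Note: api_server.go above declares the control plane healthy once GET /healthz returns 200 with body "ok". A minimal standalone probe in the same spirit is sketched below; TLS verification is skipped purely for brevity, whereas a faithful check would trust the cluster CA instead.

```go
// Sketch: probe the apiserver's /healthz endpoint and expect a 200 with body "ok".
// TLS verification is skipped only to keep the example short.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.14:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body)) // healthy cluster prints 200 and "ok"
}
```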
	I1001 19:21:48.865588   31154 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 19:21:49.042005   31154 request.go:632] Waited for 176.346586ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:21:49.042080   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:21:49.042086   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:49.042096   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:49.042103   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:49.046797   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:49.050697   31154 system_pods.go:59] 17 kube-system pods found
	I1001 19:21:49.050730   31154 system_pods.go:61] "coredns-7c65d6cfc9-hd5hv" [31f0afff-5571-46d6-888f-8982c71ba191] Running
	I1001 19:21:49.050741   31154 system_pods.go:61] "coredns-7c65d6cfc9-v2wsx" [8e3dd318-5017-4ada-bf2f-61b640ee2030] Running
	I1001 19:21:49.050745   31154 system_pods.go:61] "etcd-ha-193737" [99c3674d-50d6-4160-89dd-afb3c2e71039] Running
	I1001 19:21:49.050749   31154 system_pods.go:61] "etcd-ha-193737-m02" [a541b9c8-c10e-45b4-ac9c-d16e8bf659b0] Running
	I1001 19:21:49.050752   31154 system_pods.go:61] "kindnet-drdlr" [13177890-a0eb-47ff-8fe2-585992810e47] Running
	I1001 19:21:49.050755   31154 system_pods.go:61] "kindnet-wnr6g" [89e11419-0c5c-486e-bdbf-eaf6fab1e62c] Running
	I1001 19:21:49.050758   31154 system_pods.go:61] "kube-apiserver-ha-193737" [432bf6a6-4607-4f55-a026-d910075d3145] Running
	I1001 19:21:49.050761   31154 system_pods.go:61] "kube-apiserver-ha-193737-m02" [379a24af-2148-4870-9f4b-25ad48943445] Running
	I1001 19:21:49.050764   31154 system_pods.go:61] "kube-controller-manager-ha-193737" [95fb44db-8cb3-4db9-8a84-ab82e15760de] Running
	I1001 19:21:49.050768   31154 system_pods.go:61] "kube-controller-manager-ha-193737-m02" [5b0821e0-673a-440c-8ca1-fefbfe913b94] Running
	I1001 19:21:49.050771   31154 system_pods.go:61] "kube-proxy-4294m" [f454ca0f-d662-4dfd-ab77-ebccaf3a6b12] Running
	I1001 19:21:49.050773   31154 system_pods.go:61] "kube-proxy-zpsll" [c18fec3c-2880-4860-b220-a44d5e523bed] Running
	I1001 19:21:49.050777   31154 system_pods.go:61] "kube-scheduler-ha-193737" [46ca2b37-6145-4111-9088-bcf51307be3a] Running
	I1001 19:21:49.050780   31154 system_pods.go:61] "kube-scheduler-ha-193737-m02" [72be6cd5-d226-46d6-b675-99014e544dfb] Running
	I1001 19:21:49.050783   31154 system_pods.go:61] "kube-vip-ha-193737" [cbe8e6a4-08f3-4db3-af4d-810a5592597c] Running
	I1001 19:21:49.050790   31154 system_pods.go:61] "kube-vip-ha-193737-m02" [da7dfa59-1b27-45ce-9e61-ad8da40ec548] Running
	I1001 19:21:49.050793   31154 system_pods.go:61] "storage-provisioner" [d5b587a6-418b-47e5-9bf7-3fb6fa5e3372] Running
	I1001 19:21:49.050802   31154 system_pods.go:74] duration metric: took 185.209049ms to wait for pod list to return data ...
	I1001 19:21:49.050812   31154 default_sa.go:34] waiting for default service account to be created ...
	I1001 19:21:49.242249   31154 request.go:632] Waited for 191.355869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/default/serviceaccounts
	I1001 19:21:49.242329   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/default/serviceaccounts
	I1001 19:21:49.242336   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:49.242346   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:49.242365   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:49.246320   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:49.246557   31154 default_sa.go:45] found service account: "default"
	I1001 19:21:49.246575   31154 default_sa.go:55] duration metric: took 195.756912ms for default service account to be created ...
	I1001 19:21:49.246582   31154 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 19:21:49.442016   31154 request.go:632] Waited for 195.370336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:21:49.442076   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:21:49.442083   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:49.442092   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:49.442101   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:49.446494   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:49.452730   31154 system_pods.go:86] 17 kube-system pods found
	I1001 19:21:49.452758   31154 system_pods.go:89] "coredns-7c65d6cfc9-hd5hv" [31f0afff-5571-46d6-888f-8982c71ba191] Running
	I1001 19:21:49.452764   31154 system_pods.go:89] "coredns-7c65d6cfc9-v2wsx" [8e3dd318-5017-4ada-bf2f-61b640ee2030] Running
	I1001 19:21:49.452768   31154 system_pods.go:89] "etcd-ha-193737" [99c3674d-50d6-4160-89dd-afb3c2e71039] Running
	I1001 19:21:49.452772   31154 system_pods.go:89] "etcd-ha-193737-m02" [a541b9c8-c10e-45b4-ac9c-d16e8bf659b0] Running
	I1001 19:21:49.452775   31154 system_pods.go:89] "kindnet-drdlr" [13177890-a0eb-47ff-8fe2-585992810e47] Running
	I1001 19:21:49.452778   31154 system_pods.go:89] "kindnet-wnr6g" [89e11419-0c5c-486e-bdbf-eaf6fab1e62c] Running
	I1001 19:21:49.452781   31154 system_pods.go:89] "kube-apiserver-ha-193737" [432bf6a6-4607-4f55-a026-d910075d3145] Running
	I1001 19:21:49.452784   31154 system_pods.go:89] "kube-apiserver-ha-193737-m02" [379a24af-2148-4870-9f4b-25ad48943445] Running
	I1001 19:21:49.452788   31154 system_pods.go:89] "kube-controller-manager-ha-193737" [95fb44db-8cb3-4db9-8a84-ab82e15760de] Running
	I1001 19:21:49.452791   31154 system_pods.go:89] "kube-controller-manager-ha-193737-m02" [5b0821e0-673a-440c-8ca1-fefbfe913b94] Running
	I1001 19:21:49.452793   31154 system_pods.go:89] "kube-proxy-4294m" [f454ca0f-d662-4dfd-ab77-ebccaf3a6b12] Running
	I1001 19:21:49.452803   31154 system_pods.go:89] "kube-proxy-zpsll" [c18fec3c-2880-4860-b220-a44d5e523bed] Running
	I1001 19:21:49.452806   31154 system_pods.go:89] "kube-scheduler-ha-193737" [46ca2b37-6145-4111-9088-bcf51307be3a] Running
	I1001 19:21:49.452809   31154 system_pods.go:89] "kube-scheduler-ha-193737-m02" [72be6cd5-d226-46d6-b675-99014e544dfb] Running
	I1001 19:21:49.452812   31154 system_pods.go:89] "kube-vip-ha-193737" [cbe8e6a4-08f3-4db3-af4d-810a5592597c] Running
	I1001 19:21:49.452815   31154 system_pods.go:89] "kube-vip-ha-193737-m02" [da7dfa59-1b27-45ce-9e61-ad8da40ec548] Running
	I1001 19:21:49.452817   31154 system_pods.go:89] "storage-provisioner" [d5b587a6-418b-47e5-9bf7-3fb6fa5e3372] Running
	I1001 19:21:49.452823   31154 system_pods.go:126] duration metric: took 206.236353ms to wait for k8s-apps to be running ...
	I1001 19:21:49.452833   31154 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 19:21:49.452882   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 19:21:49.467775   31154 system_svc.go:56] duration metric: took 14.93254ms WaitForService to wait for kubelet
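Note: system_svc.go treats the kubelet unit as running when `sudo systemctl is-active --quiet service kubelet` exits 0 on the guest. A local-only sketch of the same check follows; the SSH transport minikube uses is omitted, and the unit name handling is an assumption.

```go
// Sketch: treat `systemctl is-active --quiet <unit>` exit code 0 as "active".
// Runs locally for illustration; minikube executes the command over SSH on the guest VM.
package main

import (
	"fmt"
	"os/exec"
)

func unitActive(unit string) bool {
	// --quiet suppresses output; the exit status alone carries the answer.
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	fmt.Println("kubelet active:", unitActive("kubelet"))
}
```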
	I1001 19:21:49.467809   31154 kubeadm.go:582] duration metric: took 23.152617942s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 19:21:49.467833   31154 node_conditions.go:102] verifying NodePressure condition ...
	I1001 19:21:49.642303   31154 request.go:632] Waited for 174.372716ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes
	I1001 19:21:49.642352   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes
	I1001 19:21:49.642356   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:49.642364   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:49.642369   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:49.646440   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:49.647131   31154 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 19:21:49.647176   31154 node_conditions.go:123] node cpu capacity is 2
	I1001 19:21:49.647192   31154 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 19:21:49.647199   31154 node_conditions.go:123] node cpu capacity is 2
	I1001 19:21:49.647206   31154 node_conditions.go:105] duration metric: took 179.366973ms to run NodePressure ...
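Note: the node_conditions lines above come from listing all nodes and reading the cpu and ephemeral-storage entries of each node's capacity. A sketch of that read, assuming a clientset built as in the earlier sketches:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity mirrors what node_conditions.go logs above: per-node cpu and
// ephemeral-storage capacity. The clientset is assumed to be built as shown earlier.
func printNodeCapacity(client *kubernetes.Clientset) error {
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
	return nil
}
```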
	I1001 19:21:49.647235   31154 start.go:241] waiting for startup goroutines ...
	I1001 19:21:49.647267   31154 start.go:255] writing updated cluster config ...
	I1001 19:21:49.649327   31154 out.go:201] 
	I1001 19:21:49.650621   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:21:49.650719   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:21:49.652065   31154 out.go:177] * Starting "ha-193737-m03" control-plane node in "ha-193737" cluster
	I1001 19:21:49.653048   31154 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 19:21:49.653076   31154 cache.go:56] Caching tarball of preloaded images
	I1001 19:21:49.653193   31154 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 19:21:49.653209   31154 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 19:21:49.653361   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:21:49.653640   31154 start.go:360] acquireMachinesLock for ha-193737-m03: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 19:21:49.653690   31154 start.go:364] duration metric: took 31.444µs to acquireMachinesLock for "ha-193737-m03"
	I1001 19:21:49.653709   31154 start.go:93] Provisioning new machine with config: &{Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:21:49.653808   31154 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1001 19:21:49.655218   31154 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 19:21:49.655330   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:21:49.655375   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:21:49.671457   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36635
	I1001 19:21:49.672015   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:21:49.672579   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:21:49.672608   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:21:49.673005   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:21:49.673189   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetMachineName
	I1001 19:21:49.673372   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:21:49.673585   31154 start.go:159] libmachine.API.Create for "ha-193737" (driver="kvm2")
	I1001 19:21:49.673614   31154 client.go:168] LocalClient.Create starting
	I1001 19:21:49.673650   31154 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem
	I1001 19:21:49.673691   31154 main.go:141] libmachine: Decoding PEM data...
	I1001 19:21:49.673722   31154 main.go:141] libmachine: Parsing certificate...
	I1001 19:21:49.673797   31154 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem
	I1001 19:21:49.673824   31154 main.go:141] libmachine: Decoding PEM data...
	I1001 19:21:49.673838   31154 main.go:141] libmachine: Parsing certificate...
	I1001 19:21:49.673873   31154 main.go:141] libmachine: Running pre-create checks...
	I1001 19:21:49.673885   31154 main.go:141] libmachine: (ha-193737-m03) Calling .PreCreateCheck
	I1001 19:21:49.674030   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetConfigRaw
	I1001 19:21:49.674391   31154 main.go:141] libmachine: Creating machine...
	I1001 19:21:49.674405   31154 main.go:141] libmachine: (ha-193737-m03) Calling .Create
	I1001 19:21:49.674509   31154 main.go:141] libmachine: (ha-193737-m03) Creating KVM machine...
	I1001 19:21:49.675629   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found existing default KVM network
	I1001 19:21:49.675774   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found existing private KVM network mk-ha-193737
	I1001 19:21:49.675890   31154 main.go:141] libmachine: (ha-193737-m03) Setting up store path in /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03 ...
	I1001 19:21:49.675911   31154 main.go:141] libmachine: (ha-193737-m03) Building disk image from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 19:21:49.675957   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:49.675868   32386 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:21:49.676067   31154 main.go:141] libmachine: (ha-193737-m03) Downloading /home/jenkins/minikube-integration/19736-11198/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 19:21:49.919887   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:49.919775   32386 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa...
	I1001 19:21:50.197974   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:50.197797   32386 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/ha-193737-m03.rawdisk...
	I1001 19:21:50.198009   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Writing magic tar header
	I1001 19:21:50.198030   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Writing SSH key tar header
	I1001 19:21:50.198044   31154 main.go:141] libmachine: (ha-193737-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03 (perms=drwx------)
	I1001 19:21:50.198058   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:50.197915   32386 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03 ...
	I1001 19:21:50.198069   31154 main.go:141] libmachine: (ha-193737-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines (perms=drwxr-xr-x)
	I1001 19:21:50.198088   31154 main.go:141] libmachine: (ha-193737-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube (perms=drwxr-xr-x)
	I1001 19:21:50.198099   31154 main.go:141] libmachine: (ha-193737-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198 (perms=drwxrwxr-x)
	I1001 19:21:50.198109   31154 main.go:141] libmachine: (ha-193737-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 19:21:50.198128   31154 main.go:141] libmachine: (ha-193737-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 19:21:50.198141   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03
	I1001 19:21:50.198152   31154 main.go:141] libmachine: (ha-193737-m03) Creating domain...
	I1001 19:21:50.198180   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines
	I1001 19:21:50.198190   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:21:50.198206   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198
	I1001 19:21:50.198215   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 19:21:50.198224   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Checking permissions on dir: /home/jenkins
	I1001 19:21:50.198235   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Checking permissions on dir: /home
	I1001 19:21:50.198248   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Skipping /home - not owner
	I1001 19:21:50.199136   31154 main.go:141] libmachine: (ha-193737-m03) define libvirt domain using xml: 
	I1001 19:21:50.199163   31154 main.go:141] libmachine: (ha-193737-m03) <domain type='kvm'>
	I1001 19:21:50.199174   31154 main.go:141] libmachine: (ha-193737-m03)   <name>ha-193737-m03</name>
	I1001 19:21:50.199182   31154 main.go:141] libmachine: (ha-193737-m03)   <memory unit='MiB'>2200</memory>
	I1001 19:21:50.199192   31154 main.go:141] libmachine: (ha-193737-m03)   <vcpu>2</vcpu>
	I1001 19:21:50.199198   31154 main.go:141] libmachine: (ha-193737-m03)   <features>
	I1001 19:21:50.199207   31154 main.go:141] libmachine: (ha-193737-m03)     <acpi/>
	I1001 19:21:50.199216   31154 main.go:141] libmachine: (ha-193737-m03)     <apic/>
	I1001 19:21:50.199226   31154 main.go:141] libmachine: (ha-193737-m03)     <pae/>
	I1001 19:21:50.199234   31154 main.go:141] libmachine: (ha-193737-m03)     
	I1001 19:21:50.199241   31154 main.go:141] libmachine: (ha-193737-m03)   </features>
	I1001 19:21:50.199248   31154 main.go:141] libmachine: (ha-193737-m03)   <cpu mode='host-passthrough'>
	I1001 19:21:50.199270   31154 main.go:141] libmachine: (ha-193737-m03)   
	I1001 19:21:50.199286   31154 main.go:141] libmachine: (ha-193737-m03)   </cpu>
	I1001 19:21:50.199295   31154 main.go:141] libmachine: (ha-193737-m03)   <os>
	I1001 19:21:50.199303   31154 main.go:141] libmachine: (ha-193737-m03)     <type>hvm</type>
	I1001 19:21:50.199315   31154 main.go:141] libmachine: (ha-193737-m03)     <boot dev='cdrom'/>
	I1001 19:21:50.199323   31154 main.go:141] libmachine: (ha-193737-m03)     <boot dev='hd'/>
	I1001 19:21:50.199334   31154 main.go:141] libmachine: (ha-193737-m03)     <bootmenu enable='no'/>
	I1001 19:21:50.199343   31154 main.go:141] libmachine: (ha-193737-m03)   </os>
	I1001 19:21:50.199352   31154 main.go:141] libmachine: (ha-193737-m03)   <devices>
	I1001 19:21:50.199367   31154 main.go:141] libmachine: (ha-193737-m03)     <disk type='file' device='cdrom'>
	I1001 19:21:50.199383   31154 main.go:141] libmachine: (ha-193737-m03)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/boot2docker.iso'/>
	I1001 19:21:50.199394   31154 main.go:141] libmachine: (ha-193737-m03)       <target dev='hdc' bus='scsi'/>
	I1001 19:21:50.199404   31154 main.go:141] libmachine: (ha-193737-m03)       <readonly/>
	I1001 19:21:50.199413   31154 main.go:141] libmachine: (ha-193737-m03)     </disk>
	I1001 19:21:50.199425   31154 main.go:141] libmachine: (ha-193737-m03)     <disk type='file' device='disk'>
	I1001 19:21:50.199441   31154 main.go:141] libmachine: (ha-193737-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 19:21:50.199458   31154 main.go:141] libmachine: (ha-193737-m03)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/ha-193737-m03.rawdisk'/>
	I1001 19:21:50.199468   31154 main.go:141] libmachine: (ha-193737-m03)       <target dev='hda' bus='virtio'/>
	I1001 19:21:50.199477   31154 main.go:141] libmachine: (ha-193737-m03)     </disk>
	I1001 19:21:50.199486   31154 main.go:141] libmachine: (ha-193737-m03)     <interface type='network'>
	I1001 19:21:50.199495   31154 main.go:141] libmachine: (ha-193737-m03)       <source network='mk-ha-193737'/>
	I1001 19:21:50.199503   31154 main.go:141] libmachine: (ha-193737-m03)       <model type='virtio'/>
	I1001 19:21:50.199531   31154 main.go:141] libmachine: (ha-193737-m03)     </interface>
	I1001 19:21:50.199562   31154 main.go:141] libmachine: (ha-193737-m03)     <interface type='network'>
	I1001 19:21:50.199576   31154 main.go:141] libmachine: (ha-193737-m03)       <source network='default'/>
	I1001 19:21:50.199588   31154 main.go:141] libmachine: (ha-193737-m03)       <model type='virtio'/>
	I1001 19:21:50.199599   31154 main.go:141] libmachine: (ha-193737-m03)     </interface>
	I1001 19:21:50.199608   31154 main.go:141] libmachine: (ha-193737-m03)     <serial type='pty'>
	I1001 19:21:50.199619   31154 main.go:141] libmachine: (ha-193737-m03)       <target port='0'/>
	I1001 19:21:50.199627   31154 main.go:141] libmachine: (ha-193737-m03)     </serial>
	I1001 19:21:50.199662   31154 main.go:141] libmachine: (ha-193737-m03)     <console type='pty'>
	I1001 19:21:50.199708   31154 main.go:141] libmachine: (ha-193737-m03)       <target type='serial' port='0'/>
	I1001 19:21:50.199726   31154 main.go:141] libmachine: (ha-193737-m03)     </console>
	I1001 19:21:50.199748   31154 main.go:141] libmachine: (ha-193737-m03)     <rng model='virtio'>
	I1001 19:21:50.199767   31154 main.go:141] libmachine: (ha-193737-m03)       <backend model='random'>/dev/random</backend>
	I1001 19:21:50.199780   31154 main.go:141] libmachine: (ha-193737-m03)     </rng>
	I1001 19:21:50.199794   31154 main.go:141] libmachine: (ha-193737-m03)     
	I1001 19:21:50.199803   31154 main.go:141] libmachine: (ha-193737-m03)     
	I1001 19:21:50.199814   31154 main.go:141] libmachine: (ha-193737-m03)   </devices>
	I1001 19:21:50.199820   31154 main.go:141] libmachine: (ha-193737-m03) </domain>
	I1001 19:21:50.199837   31154 main.go:141] libmachine: (ha-193737-m03) 
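	[annotation] The block above is the full libvirt domain XML the kvm2 driver generates for this node: boot ISO on a SCSI CD-ROM, raw virtio disk, two virtio NICs (networks "default" and "mk-ha-193737"), a pty serial console and a virtio RNG. The driver submits this XML through the libvirt API; the sketch below only illustrates the equivalent step via the virsh CLI (paths and names are placeholders, not minikube's actual code).

	package main

	// Sketch only: minikube's kvm2 driver defines the domain through the libvirt
	// API, not by shelling out. This shows the equivalent virsh steps.
	import (
		"fmt"
		"log"
		"os/exec"
	)

	func defineAndStart(xmlPath, domain string) error {
		// "virsh define" registers the persistent domain from its XML description.
		if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
			return fmt.Errorf("virsh define: %v: %s", err, out)
		}
		// "virsh start" boots the defined domain.
		if out, err := exec.Command("virsh", "start", domain).CombinedOutput(); err != nil {
			return fmt.Errorf("virsh start: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		// Hypothetical XML path and domain name, for illustration only.
		if err := defineAndStart("/tmp/ha-193737-m03.xml", "ha-193737-m03"); err != nil {
			log.Fatal(err)
		}
	}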
	I1001 19:21:50.206580   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:8b:a8:e7 in network default
	I1001 19:21:50.207376   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:50.207405   31154 main.go:141] libmachine: (ha-193737-m03) Ensuring networks are active...
	I1001 19:21:50.208168   31154 main.go:141] libmachine: (ha-193737-m03) Ensuring network default is active
	I1001 19:21:50.208498   31154 main.go:141] libmachine: (ha-193737-m03) Ensuring network mk-ha-193737 is active
	I1001 19:21:50.208873   31154 main.go:141] libmachine: (ha-193737-m03) Getting domain xml...
	I1001 19:21:50.209740   31154 main.go:141] libmachine: (ha-193737-m03) Creating domain...
	I1001 19:21:51.487699   31154 main.go:141] libmachine: (ha-193737-m03) Waiting to get IP...
	I1001 19:21:51.488558   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:51.488971   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:51.488988   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:51.488956   32386 retry.go:31] will retry after 292.057466ms: waiting for machine to come up
	I1001 19:21:51.782677   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:51.783145   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:51.783197   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:51.783106   32386 retry.go:31] will retry after 354.701551ms: waiting for machine to come up
	I1001 19:21:52.139803   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:52.140295   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:52.140322   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:52.140239   32386 retry.go:31] will retry after 363.996754ms: waiting for machine to come up
	I1001 19:21:52.505881   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:52.506427   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:52.506447   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:52.506386   32386 retry.go:31] will retry after 414.43192ms: waiting for machine to come up
	I1001 19:21:52.922204   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:52.922737   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:52.922766   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:52.922724   32386 retry.go:31] will retry after 579.407554ms: waiting for machine to come up
	I1001 19:21:53.503613   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:53.504058   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:53.504085   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:53.504000   32386 retry.go:31] will retry after 721.311664ms: waiting for machine to come up
	I1001 19:21:54.227110   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:54.227610   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:54.227655   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:54.227567   32386 retry.go:31] will retry after 1.130708111s: waiting for machine to come up
	I1001 19:21:55.360491   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:55.360900   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:55.360926   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:55.360870   32386 retry.go:31] will retry after 1.468803938s: waiting for machine to come up
	I1001 19:21:56.831225   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:56.831722   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:56.831750   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:56.831677   32386 retry.go:31] will retry after 1.742550848s: waiting for machine to come up
	I1001 19:21:58.576460   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:58.576859   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:58.576883   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:58.576823   32386 retry.go:31] will retry after 1.623668695s: waiting for machine to come up
	I1001 19:22:00.201759   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:00.202340   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:22:00.202361   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:22:00.202290   32386 retry.go:31] will retry after 1.997667198s: waiting for machine to come up
	I1001 19:22:02.201433   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:02.201901   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:22:02.201917   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:22:02.201868   32386 retry.go:31] will retry after 2.886327611s: waiting for machine to come up
	I1001 19:22:05.090402   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:05.090907   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:22:05.090933   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:22:05.090844   32386 retry.go:31] will retry after 3.87427099s: waiting for machine to come up
	I1001 19:22:08.966290   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:08.966719   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:22:08.966754   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:22:08.966674   32386 retry.go:31] will retry after 4.039315752s: waiting for machine to come up
	I1001 19:22:13.009358   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.009842   31154 main.go:141] libmachine: (ha-193737-m03) Found IP for machine: 192.168.39.101
	I1001 19:22:13.009868   31154 main.go:141] libmachine: (ha-193737-m03) Reserving static IP address...
	I1001 19:22:13.009881   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has current primary IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.010863   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find host DHCP lease matching {name: "ha-193737-m03", mac: "52:54:00:9e:b9:5c", ip: "192.168.39.101"} in network mk-ha-193737
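	[annotation] The repeated "will retry after …" lines above show the driver polling for a DHCP lease with a growing delay until the VM's MAC appears in the network. A rough sketch of that pattern is below; the real driver reads libvirt's lease table directly, whereas this parses `virsh net-dhcp-leases`, with the MAC and network name taken from the log.

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
		"time"
	)

	// waitForLease polls `virsh net-dhcp-leases` until a lease for mac appears,
	// roughly doubling the delay between attempts, capped at maxDelay.
	func waitForLease(network, mac string, maxDelay time.Duration) (string, error) {
		delay := 300 * time.Millisecond
		for i := 0; i < 20; i++ {
			out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
			if err == nil {
				for _, line := range strings.Split(string(out), "\n") {
					if strings.Contains(line, mac) {
						return line, nil // lease row containing the assigned IP
					}
				}
			}
			log.Printf("no lease for %s yet, retrying in %s", mac, delay)
			time.Sleep(delay)
			if delay *= 2; delay > maxDelay {
				delay = maxDelay
			}
		}
		return "", fmt.Errorf("no DHCP lease for %s in network %s", mac, network)
	}

	func main() {
		lease, err := waitForLease("mk-ha-193737", "52:54:00:9e:b9:5c", 4*time.Second)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(lease)
	}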
	I1001 19:22:13.088968   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Getting to WaitForSSH function...
	I1001 19:22:13.088993   31154 main.go:141] libmachine: (ha-193737-m03) Reserved static IP address: 192.168.39.101
	I1001 19:22:13.089006   31154 main.go:141] libmachine: (ha-193737-m03) Waiting for SSH to be available...
	I1001 19:22:13.091870   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.092415   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.092449   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.092644   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Using SSH client type: external
	I1001 19:22:13.092667   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa (-rw-------)
	I1001 19:22:13.092694   31154 main.go:141] libmachine: (ha-193737-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.101 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 19:22:13.092712   31154 main.go:141] libmachine: (ha-193737-m03) DBG | About to run SSH command:
	I1001 19:22:13.092731   31154 main.go:141] libmachine: (ha-193737-m03) DBG | exit 0
	I1001 19:22:13.220534   31154 main.go:141] libmachine: (ha-193737-m03) DBG | SSH cmd err, output: <nil>: 
	I1001 19:22:13.220779   31154 main.go:141] libmachine: (ha-193737-m03) KVM machine creation complete!
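	[annotation] WaitForSSH above succeeds once `exit 0` runs cleanly over SSH with the machine's private key. A much simpler readiness probe with the same intent is sketched below: it only checks that the guest's port 22 accepts TCP connections (no authentication), using the address reported in the log.

	package main

	import (
		"fmt"
		"log"
		"net"
		"time"
	)

	// waitForSSH dials the SSH port until it accepts a connection or the
	// deadline passes. It does not authenticate; it only proves sshd is up.
	func waitForSSH(addr string, deadline time.Duration) error {
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("ssh on %s not reachable within %s", addr, deadline)
	}

	func main() {
		if err := waitForSSH("192.168.39.101:22", 2*time.Minute); err != nil {
			log.Fatal(err)
		}
		fmt.Println("ssh is reachable")
	}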
	I1001 19:22:13.221074   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetConfigRaw
	I1001 19:22:13.221579   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:22:13.221804   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:22:13.221984   31154 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 19:22:13.222002   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetState
	I1001 19:22:13.223279   31154 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 19:22:13.223293   31154 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 19:22:13.223299   31154 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 19:22:13.223305   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:13.225923   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.226398   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.226416   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.226678   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:13.226887   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.227052   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.227186   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:13.227368   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:22:13.227559   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1001 19:22:13.227571   31154 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 19:22:13.332328   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 19:22:13.332352   31154 main.go:141] libmachine: Detecting the provisioner...
	I1001 19:22:13.332384   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:13.335169   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.335569   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.335603   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.335764   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:13.336042   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.336239   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.336386   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:13.336591   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:22:13.336771   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1001 19:22:13.336783   31154 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 19:22:13.445518   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 19:22:13.445586   31154 main.go:141] libmachine: found compatible host: buildroot
	I1001 19:22:13.445594   31154 main.go:141] libmachine: Provisioning with buildroot...
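	[annotation] Provisioner detection is driven by the `cat /etc/os-release` output above: the ID/NAME fields identify Buildroot, so the buildroot provisioner is selected. A small sketch of parsing that key=value output into a map (sample input copied verbatim from the log):

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// parseOSRelease turns /etc/os-release "KEY=value" lines into a map,
	// stripping surrounding quotes from the values.
	func parseOSRelease(contents string) map[string]string {
		info := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(contents))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || !strings.Contains(line, "=") {
				continue
			}
			kv := strings.SplitN(line, "=", 2)
			info[kv[0]] = strings.Trim(kv[1], `"`)
		}
		return info
	}

	func main() {
		sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
		info := parseOSRelease(sample)
		fmt.Println(info["ID"], info["PRETTY_NAME"]) // buildroot Buildroot 2023.02.9
	}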
	I1001 19:22:13.445601   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetMachineName
	I1001 19:22:13.445821   31154 buildroot.go:166] provisioning hostname "ha-193737-m03"
	I1001 19:22:13.445847   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetMachineName
	I1001 19:22:13.446042   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:13.449433   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.449860   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.449897   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.450180   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:13.450368   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.450566   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.450713   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:13.450881   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:22:13.451039   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1001 19:22:13.451051   31154 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-193737-m03 && echo "ha-193737-m03" | sudo tee /etc/hostname
	I1001 19:22:13.572777   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-193737-m03
	
	I1001 19:22:13.572810   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:13.575494   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.575835   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.575859   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.576047   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:13.576235   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.576419   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.576571   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:13.576759   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:22:13.576956   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1001 19:22:13.576973   31154 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-193737-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-193737-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-193737-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 19:22:13.689983   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
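	[annotation] The shell snippet run above keeps /etc/hosts consistent with the new hostname: if no line already ends in ha-193737-m03, it either rewrites the existing 127.0.1.1 entry or appends one. The same logic expressed in Go on the file contents is sketched below; it is an illustration, not the command minikube actually runs.

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// ensureHostsEntry returns hosts contents that map 127.0.1.1 to hostname,
	// mirroring the grep/sed/tee logic from the provisioning command.
	func ensureHostsEntry(hosts, hostname string) string {
		if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(hosts) {
			return hosts // an entry for this hostname already exists
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
	}

	func main() {
		fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "ha-193737-m03"))
	}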
	I1001 19:22:13.690015   31154 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 19:22:13.690038   31154 buildroot.go:174] setting up certificates
	I1001 19:22:13.690050   31154 provision.go:84] configureAuth start
	I1001 19:22:13.690066   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetMachineName
	I1001 19:22:13.690369   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetIP
	I1001 19:22:13.693242   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.693664   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.693693   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.693840   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:13.696141   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.696495   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.696524   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.696638   31154 provision.go:143] copyHostCerts
	I1001 19:22:13.696676   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:22:13.696720   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 19:22:13.696731   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:22:13.696821   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 19:22:13.696919   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:22:13.696949   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 19:22:13.696960   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:22:13.697003   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 19:22:13.697067   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:22:13.697091   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 19:22:13.697100   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:22:13.697136   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 19:22:13.697206   31154 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.ha-193737-m03 san=[127.0.0.1 192.168.39.101 ha-193737-m03 localhost minikube]
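	[annotation] The server cert generated here is signed by the local minikube CA and carries the node's IPs plus hostname/localhost as SANs, so the endpoint is valid on every address it is reached at. Below is a condensed crypto/x509 sketch of issuing such a cert; to stay self-contained it creates a throwaway CA instead of loading ca.pem/ca-key.pem, and it uses the SANs from the log line above.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA (the real flow loads ca.pem / ca-key.pem from .minikube/certs).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert with the SANs listed in the provisioning log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "ha-193737-m03", Organization: []string{"jenkins.ha-193737-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-193737-m03", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.101")},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("issued server cert, %d bytes DER\n", len(srvDER))
	}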
	I1001 19:22:13.877573   31154 provision.go:177] copyRemoteCerts
	I1001 19:22:13.877625   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 19:22:13.877649   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:13.880678   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.880932   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.880970   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.881176   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:13.881406   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.881587   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:13.881804   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa Username:docker}
	I1001 19:22:13.962987   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 19:22:13.963068   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 19:22:13.986966   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 19:22:13.987070   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 19:22:14.013722   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 19:22:14.013794   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 19:22:14.037854   31154 provision.go:87] duration metric: took 347.788312ms to configureAuth
	I1001 19:22:14.037883   31154 buildroot.go:189] setting minikube options for container-runtime
	I1001 19:22:14.038135   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:22:14.038209   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:14.040944   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.041372   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.041401   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.041587   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:14.041771   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:14.041906   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:14.042003   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:14.042139   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:22:14.042328   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1001 19:22:14.042345   31154 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 19:22:14.262634   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 19:22:14.262673   31154 main.go:141] libmachine: Checking connection to Docker...
	I1001 19:22:14.262687   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetURL
	I1001 19:22:14.263998   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Using libvirt version 6000000
	I1001 19:22:14.266567   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.266926   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.266955   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.267154   31154 main.go:141] libmachine: Docker is up and running!
	I1001 19:22:14.267166   31154 main.go:141] libmachine: Reticulating splines...
	I1001 19:22:14.267173   31154 client.go:171] duration metric: took 24.593551771s to LocalClient.Create
	I1001 19:22:14.267196   31154 start.go:167] duration metric: took 24.593612564s to libmachine.API.Create "ha-193737"
	I1001 19:22:14.267205   31154 start.go:293] postStartSetup for "ha-193737-m03" (driver="kvm2")
	I1001 19:22:14.267214   31154 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 19:22:14.267240   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:22:14.267459   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 19:22:14.267484   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:14.269571   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.269977   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.270004   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.270121   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:14.270292   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:14.270427   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:14.270551   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa Username:docker}
	I1001 19:22:14.350988   31154 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 19:22:14.355823   31154 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 19:22:14.355848   31154 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 19:22:14.355915   31154 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 19:22:14.355986   31154 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 19:22:14.355994   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /etc/ssl/certs/184302.pem
	I1001 19:22:14.356070   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 19:22:14.366040   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:22:14.390055   31154 start.go:296] duration metric: took 122.835456ms for postStartSetup
	I1001 19:22:14.390108   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetConfigRaw
	I1001 19:22:14.390696   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetIP
	I1001 19:22:14.394065   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.394508   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.394536   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.394910   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:22:14.395150   31154 start.go:128] duration metric: took 24.741329773s to createHost
	I1001 19:22:14.395182   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:14.397581   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.397994   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.398017   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.398188   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:14.398403   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:14.398574   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:14.398727   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:14.398880   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:22:14.399094   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1001 19:22:14.399111   31154 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 19:22:14.505599   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727810534.482085733
	
	I1001 19:22:14.505628   31154 fix.go:216] guest clock: 1727810534.482085733
	I1001 19:22:14.505639   31154 fix.go:229] Guest: 2024-10-01 19:22:14.482085733 +0000 UTC Remote: 2024-10-01 19:22:14.395166889 +0000 UTC m=+146.623005707 (delta=86.918844ms)
	I1001 19:22:14.505658   31154 fix.go:200] guest clock delta is within tolerance: 86.918844ms
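	[annotation] The fix step above compares the guest's `date +%s.%N` output against the local timestamp taken for the request and only proceeds because the ~87ms delta is within tolerance. A sketch of that comparison, parsing the seconds.nanoseconds string and using the two timestamps from the log (the tolerance constant here is an assumption for illustration):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseEpoch converts "seconds.nanoseconds" (the output of `date +%s.%N`)
	// into a time.Time.
	func parseEpoch(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseEpoch("1727810534.482085733") // guest clock value from the log
		if err != nil {
			panic(err)
		}
		host := time.Date(2024, 10, 1, 19, 22, 14, 395166889, time.UTC) // "Remote" timestamp from the log
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed threshold, for illustration only
		fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
	}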
	I1001 19:22:14.505664   31154 start.go:83] releasing machines lock for "ha-193737-m03", held for 24.851963464s
	I1001 19:22:14.505684   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:22:14.505908   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetIP
	I1001 19:22:14.508696   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.509064   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.509086   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.511117   31154 out.go:177] * Found network options:
	I1001 19:22:14.512450   31154 out.go:177]   - NO_PROXY=192.168.39.14,192.168.39.27
	W1001 19:22:14.513603   31154 proxy.go:119] fail to check proxy env: Error ip not in block
	W1001 19:22:14.513632   31154 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 19:22:14.513653   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:22:14.514254   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:22:14.514460   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:22:14.514553   31154 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 19:22:14.514592   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	W1001 19:22:14.514627   31154 proxy.go:119] fail to check proxy env: Error ip not in block
	W1001 19:22:14.514652   31154 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 19:22:14.514726   31154 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 19:22:14.514748   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:14.517511   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.517716   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.517872   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.517897   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.518069   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:14.518071   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.518151   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.518298   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:14.518302   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:14.518474   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:14.518512   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:14.518613   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:14.518617   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa Username:docker}
	I1001 19:22:14.518740   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa Username:docker}
	I1001 19:22:14.749140   31154 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 19:22:14.755011   31154 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 19:22:14.755083   31154 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 19:22:14.772351   31154 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 19:22:14.772388   31154 start.go:495] detecting cgroup driver to use...
	I1001 19:22:14.772457   31154 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 19:22:14.789303   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 19:22:14.804840   31154 docker.go:217] disabling cri-docker service (if available) ...
	I1001 19:22:14.804906   31154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 19:22:14.819518   31154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 19:22:14.834095   31154 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 19:22:14.944783   31154 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 19:22:15.079717   31154 docker.go:233] disabling docker service ...
	I1001 19:22:15.079790   31154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 19:22:15.095162   31154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 19:22:15.107998   31154 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 19:22:15.243729   31154 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 19:22:15.377225   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 19:22:15.391343   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 19:22:15.411068   31154 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 19:22:15.411143   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:22:15.423227   31154 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 19:22:15.423294   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:22:15.434691   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:22:15.446242   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:22:15.457352   31154 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 19:22:15.469147   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:22:15.479924   31154 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:22:15.497221   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
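	[annotation] The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf: it pins the pause image, switches cri-o to the cgroupfs cgroup manager, places conmon in the pod cgroup, and opens unprivileged low ports via default_sysctls. The sketch below does the same kind of key rewrite in Go instead of sed, operating on the file contents in memory; the keys and values are the ones from the log.

	package main

	import (
		"fmt"
		"regexp"
	)

	// setTOMLKey replaces (or appends) a `key = value` assignment in a crio
	// drop-in, mimicking `sed -i 's|^.*key = .*$|key = value|'`.
	func setTOMLKey(conf, key, value string) string {
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		line := fmt.Sprintf("%s = %s", key, value)
		if re.MatchString(conf) {
			return re.ReplaceAllString(conf, line)
		}
		return conf + "\n" + line + "\n"
	}

	func main() {
		conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
		conf = setTOMLKey(conf, "pause_image", `"registry.k8s.io/pause:3.10"`)
		conf = setTOMLKey(conf, "cgroup_manager", `"cgroupfs"`)
		fmt.Print(conf)
	}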
	I1001 19:22:15.507678   31154 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 19:22:15.517482   31154 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 19:22:15.517554   31154 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 19:22:15.532214   31154 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 19:22:15.541788   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:22:15.665094   31154 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 19:22:15.757492   31154 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 19:22:15.757569   31154 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 19:22:15.762004   31154 start.go:563] Will wait 60s for crictl version
	I1001 19:22:15.762063   31154 ssh_runner.go:195] Run: which crictl
	I1001 19:22:15.766039   31154 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 19:22:15.802516   31154 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 19:22:15.802600   31154 ssh_runner.go:195] Run: crio --version
	I1001 19:22:15.831926   31154 ssh_runner.go:195] Run: crio --version
	I1001 19:22:15.862187   31154 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 19:22:15.863552   31154 out.go:177]   - env NO_PROXY=192.168.39.14
	I1001 19:22:15.864903   31154 out.go:177]   - env NO_PROXY=192.168.39.14,192.168.39.27
	I1001 19:22:15.866357   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetIP
	I1001 19:22:15.868791   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:15.869113   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:15.869142   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:15.869293   31154 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 19:22:15.873237   31154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 19:22:15.885293   31154 mustload.go:65] Loading cluster: ha-193737
	I1001 19:22:15.885514   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:22:15.885795   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:22:15.885838   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:22:15.901055   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45767
	I1001 19:22:15.901633   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:22:15.902627   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:22:15.902658   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:22:15.903034   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:22:15.903198   31154 main.go:141] libmachine: (ha-193737) Calling .GetState
	I1001 19:22:15.905017   31154 host.go:66] Checking if "ha-193737" exists ...
	I1001 19:22:15.905429   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:22:15.905488   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:22:15.921741   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33127
	I1001 19:22:15.922203   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:22:15.923200   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:22:15.923220   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:22:15.923541   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:22:15.923744   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:22:15.923907   31154 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737 for IP: 192.168.39.101
	I1001 19:22:15.923919   31154 certs.go:194] generating shared ca certs ...
	I1001 19:22:15.923941   31154 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:22:15.924081   31154 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 19:22:15.924118   31154 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 19:22:15.924126   31154 certs.go:256] generating profile certs ...
	I1001 19:22:15.924217   31154 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key
	I1001 19:22:15.924242   31154 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.09da423f
	I1001 19:22:15.924256   31154 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.09da423f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.14 192.168.39.27 192.168.39.101 192.168.39.254]
	I1001 19:22:16.102464   31154 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.09da423f ...
	I1001 19:22:16.102493   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.09da423f: {Name:mk41b913f57e7f10c713b2e18136c742f7b09ec4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:22:16.102655   31154 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.09da423f ...
	I1001 19:22:16.102668   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.09da423f: {Name:mkaf44cea34e6bfbac4ea8c8d70ebec43d2a6d80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:22:16.102739   31154 certs.go:381] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.09da423f -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt
	I1001 19:22:16.102870   31154 certs.go:385] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.09da423f -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key
	I1001 19:22:16.102988   31154 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key
	I1001 19:22:16.103003   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 19:22:16.103016   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 19:22:16.103030   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 19:22:16.103042   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 19:22:16.103054   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 19:22:16.103067   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 19:22:16.103081   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 19:22:16.120441   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 19:22:16.120535   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem (1338 bytes)
	W1001 19:22:16.120569   31154 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430_empty.pem, impossibly tiny 0 bytes
	I1001 19:22:16.120579   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 19:22:16.120602   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 19:22:16.120624   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 19:22:16.120682   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 19:22:16.120730   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:22:16.120759   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /usr/share/ca-certificates/184302.pem
	I1001 19:22:16.120772   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:22:16.120784   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem -> /usr/share/ca-certificates/18430.pem
	I1001 19:22:16.120814   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:22:16.123512   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:22:16.123983   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:22:16.124012   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:22:16.124198   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:22:16.124425   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:22:16.124611   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:22:16.124747   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:22:16.196684   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1001 19:22:16.201293   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1001 19:22:16.211163   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1001 19:22:16.215061   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1001 19:22:16.225018   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1001 19:22:16.228909   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1001 19:22:16.239430   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1001 19:22:16.243222   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1001 19:22:16.253163   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1001 19:22:16.256929   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1001 19:22:16.266378   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1001 19:22:16.270062   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1001 19:22:16.278964   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 19:22:16.303288   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 19:22:16.326243   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 19:22:16.347460   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 19:22:16.372037   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1001 19:22:16.396287   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 19:22:16.420724   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 19:22:16.445707   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 19:22:16.468539   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /usr/share/ca-certificates/184302.pem (1708 bytes)
	I1001 19:22:16.492971   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 19:22:16.517838   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem --> /usr/share/ca-certificates/18430.pem (1338 bytes)
	I1001 19:22:16.541960   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1001 19:22:16.557831   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1001 19:22:16.573594   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1001 19:22:16.590168   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1001 19:22:16.607168   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1001 19:22:16.623957   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1001 19:22:16.640438   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1001 19:22:16.655967   31154 ssh_runner.go:195] Run: openssl version
	I1001 19:22:16.661524   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/184302.pem && ln -fs /usr/share/ca-certificates/184302.pem /etc/ssl/certs/184302.pem"
	I1001 19:22:16.672376   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/184302.pem
	I1001 19:22:16.676864   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 19:22:16.676922   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/184302.pem
	I1001 19:22:16.682647   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/184302.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 19:22:16.693083   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 19:22:16.703938   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:22:16.708263   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:22:16.708320   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:22:16.714520   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 19:22:16.725249   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18430.pem && ln -fs /usr/share/ca-certificates/18430.pem /etc/ssl/certs/18430.pem"
	I1001 19:22:16.736315   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18430.pem
	I1001 19:22:16.741061   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 19:22:16.741120   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18430.pem
	I1001 19:22:16.746697   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18430.pem /etc/ssl/certs/51391683.0"
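
The three `ln -fs` runs above create OpenSSL subject-hash symlinks (`3ec20f2e.0`, `b5213941.0`, `51391683.0`) under /etc/ssl/certs so the system trust store can resolve each CA certificate by hash. A minimal Go sketch of that step, shelling out to `openssl x509 -hash` the same way the ssh_runner commands do; the certificate path below is illustrative and writing into /etc/ssl/certs requires root.

```go
// A sketch of the hash-symlink step above: compute the OpenSSL subject hash of
// a CA certificate and link it as /etc/ssl/certs/<hash>.0. The certificate
// path is illustrative, not taken from the log verbatim.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of: ln -fs <cert> /etc/ssl/certs/<hash>.0
	_ = os.Remove(link) // ignore "not found" to get -f semantics
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("created", link)
}
```
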
	I1001 19:22:16.757551   31154 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 19:22:16.761481   31154 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 19:22:16.761539   31154 kubeadm.go:934] updating node {m03 192.168.39.101 8443 v1.31.1 crio true true} ...
	I1001 19:22:16.761636   31154 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-193737-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 19:22:16.761666   31154 kube-vip.go:115] generating kube-vip config ...
	I1001 19:22:16.761704   31154 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1001 19:22:16.778682   31154 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1001 19:22:16.778755   31154 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
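
The manifest above is the kube-vip static pod that minikube later copies to /etc/kubernetes/manifests/kube-vip.yaml on the control-plane node; it advertises the HA virtual IP 192.168.39.254, enables leader election via the plndr-cp-lock lease, and load-balances port 8443 across control-plane members. Below is a minimal sketch of rendering such a manifest with Go's text/template, assuming a simplified parameter struct; this is not minikube's actual kube-vip.go template, only an illustration of the approach.

```go
// A sketch (not minikube's kube-vip.go template): render a kube-vip static-pod
// manifest with text/template. The struct fields and template body are illustrative.
package main

import (
	"os"
	"text/template"
)

type kubeVipParams struct {
	VIP   string // virtual IP the control plane is reached on
	Port  string // API server port fronted by the VIP
	Image string // kube-vip container image
}

const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - name: address
      value: {{ .VIP }}
    - name: port
      value: "{{ .Port }}"
    - name: cp_enable
      value: "true"
    image: {{ .Image }}
    name: kube-vip
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	p := kubeVipParams{VIP: "192.168.39.254", Port: "8443", Image: "ghcr.io/kube-vip/kube-vip:v0.8.3"}
	// Print the rendered YAML to stdout; minikube scps the real manifest onto the node.
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
```
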
	I1001 19:22:16.778825   31154 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 19:22:16.788174   31154 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1001 19:22:16.788258   31154 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1001 19:22:16.797330   31154 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1001 19:22:16.797360   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 19:22:16.797405   31154 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1001 19:22:16.797420   31154 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 19:22:16.797425   31154 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1001 19:22:16.797452   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 19:22:16.797455   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 19:22:16.797515   31154 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 19:22:16.806983   31154 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1001 19:22:16.807016   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1001 19:22:16.807033   31154 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1001 19:22:16.807064   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1001 19:22:16.822346   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 19:22:16.822450   31154 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 19:22:16.908222   31154 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1001 19:22:16.908266   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1001 19:22:17.718151   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1001 19:22:17.728679   31154 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1001 19:22:17.753493   31154 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 19:22:17.773315   31154 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1001 19:22:17.791404   31154 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1001 19:22:17.795599   31154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 19:22:17.808083   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:22:17.928195   31154 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 19:22:17.944678   31154 host.go:66] Checking if "ha-193737" exists ...
	I1001 19:22:17.945052   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:22:17.945093   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:22:17.962020   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45629
	I1001 19:22:17.962474   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:22:17.962912   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:22:17.962940   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:22:17.963311   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:22:17.963520   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:22:17.963697   31154 start.go:317] joinCluster: &{Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:22:17.963861   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1001 19:22:17.963886   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:22:17.967232   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:22:17.967827   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:22:17.967856   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:22:17.968135   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:22:17.968336   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:22:17.968495   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:22:17.968659   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:22:18.133596   31154 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:22:18.133651   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token z7cdmg.hjk7kyt30ndw2tea --discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-193737-m03 --control-plane --apiserver-advertise-address=192.168.39.101 --apiserver-bind-port=8443"
	I1001 19:22:41.859086   31154 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token z7cdmg.hjk7kyt30ndw2tea --discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-193737-m03 --control-plane --apiserver-advertise-address=192.168.39.101 --apiserver-bind-port=8443": (23.725407283s)
	I1001 19:22:41.859128   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1001 19:22:42.384071   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-193737-m03 minikube.k8s.io/updated_at=2024_10_01T19_22_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=ha-193737 minikube.k8s.io/primary=false
	I1001 19:22:42.510669   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-193737-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1001 19:22:42.641492   31154 start.go:319] duration metric: took 24.67779185s to joinCluster
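
The `kubeadm join` above (completed in about 23.7s) adds ha-193737-m03 as a third control-plane member through the VIP endpoint control-plane.minikube.internal:8443, after which the node is labeled and its control-plane NoSchedule taint is removed. A hedged sketch of assembling that flag set programmatically; the token and discovery CA-cert hash below are placeholders, not the real values from the log.

```go
// A sketch: assemble the kubeadm join flags seen in the log for an extra
// control-plane node. The token and discovery hash are placeholders.
package main

import (
	"fmt"
	"strings"
)

func joinCommand(endpoint, token, caHash, nodeName, advertiseIP string) string {
	args := []string{
		"kubeadm", "join", endpoint,
		"--token", token,
		"--discovery-token-ca-cert-hash", caHash,
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name", nodeName,
		"--control-plane",
		"--apiserver-advertise-address", advertiseIP,
		"--apiserver-bind-port", "8443",
	}
	return strings.Join(args, " ")
}

func main() {
	fmt.Println(joinCommand(
		"control-plane.minikube.internal:8443",
		"<token>",               // placeholder
		"sha256:<ca-cert-hash>", // placeholder
		"ha-193737-m03",
		"192.168.39.101",
	))
}
```
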
	I1001 19:22:42.641581   31154 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:22:42.641937   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:22:42.642770   31154 out.go:177] * Verifying Kubernetes components...
	I1001 19:22:42.643798   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:22:42.883720   31154 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 19:22:42.899372   31154 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 19:22:42.899626   31154 kapi.go:59] client config for ha-193737: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.crt", KeyFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key", CAFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1001 19:22:42.899683   31154 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.14:8443
	I1001 19:22:42.899959   31154 node_ready.go:35] waiting up to 6m0s for node "ha-193737-m03" to be "Ready" ...
	I1001 19:22:42.900040   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:42.900052   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:42.900063   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:42.900071   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:42.904647   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:22:43.401126   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:43.401152   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:43.401163   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:43.401168   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:43.405027   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:43.900824   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:43.900848   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:43.900859   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:43.900868   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:43.904531   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:44.400251   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:44.400272   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:44.400281   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:44.400285   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:44.403517   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:44.901001   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:44.901028   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:44.901036   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:44.901041   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:44.905012   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:44.905575   31154 node_ready.go:53] node "ha-193737-m03" has status "Ready":"False"
	I1001 19:22:45.400898   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:45.400924   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:45.400935   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:45.400942   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:45.405202   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:22:45.900749   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:45.900772   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:45.900781   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:45.900785   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:45.904505   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:46.400832   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:46.400855   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:46.400865   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:46.400871   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:46.404455   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:46.900834   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:46.900926   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:46.900945   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:46.900955   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:46.907848   31154 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1001 19:22:46.909060   31154 node_ready.go:53] node "ha-193737-m03" has status "Ready":"False"
	I1001 19:22:47.400619   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:47.400639   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:47.400647   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:47.400651   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:47.404519   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:47.900808   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:47.900835   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:47.900846   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:47.900851   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:48.028121   31154 round_trippers.go:574] Response Status: 200 OK in 127 milliseconds
	I1001 19:22:48.400839   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:48.400859   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:48.400866   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:48.400870   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:48.404198   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:48.900508   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:48.900533   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:48.900544   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:48.900551   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:48.904379   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:49.400836   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:49.400857   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:49.400866   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:49.400870   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:49.403736   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:22:49.404256   31154 node_ready.go:53] node "ha-193737-m03" has status "Ready":"False"
	I1001 19:22:49.901034   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:49.901058   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:49.901068   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:49.901073   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:49.905378   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:22:50.400178   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:50.400198   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:50.400206   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:50.400214   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:50.403269   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:50.901215   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:50.901242   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:50.901251   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:50.901256   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:50.905409   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:22:51.400867   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:51.400890   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:51.400899   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:51.400908   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:51.404516   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:51.404962   31154 node_ready.go:53] node "ha-193737-m03" has status "Ready":"False"
	I1001 19:22:51.900265   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:51.900308   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:51.900315   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:51.900319   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:51.903634   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:52.401178   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:52.401200   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:52.401206   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:52.401211   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:52.404511   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:52.900412   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:52.900432   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:52.900441   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:52.900446   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:52.903570   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:53.400572   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:53.400602   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:53.400614   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:53.400622   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:53.403821   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:53.900178   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:53.900201   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:53.900210   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:53.900214   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:53.903933   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:53.904621   31154 node_ready.go:53] node "ha-193737-m03" has status "Ready":"False"
	I1001 19:22:54.401040   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:54.401066   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:54.401078   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:54.401085   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:54.404732   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:54.901129   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:54.901154   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:54.901163   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:54.901166   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:54.904547   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:55.400669   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:55.400692   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:55.400700   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:55.400703   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:55.404556   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:55.900944   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:55.900966   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:55.900974   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:55.900977   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:55.904209   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:55.904851   31154 node_ready.go:53] node "ha-193737-m03" has status "Ready":"False"
	I1001 19:22:56.400513   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:56.400537   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:56.400548   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:56.400554   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:56.403671   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:56.900541   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:56.900564   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:56.900575   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:56.900582   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:56.903726   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:57.400178   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:57.400200   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:57.400209   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:57.400216   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:57.403658   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:57.901131   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:57.901154   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:57.901163   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:57.901169   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:57.904387   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:58.401066   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:58.401087   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:58.401095   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:58.401098   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:58.404875   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:58.405329   31154 node_ready.go:53] node "ha-193737-m03" has status "Ready":"False"
	I1001 19:22:58.900140   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:58.900160   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:58.900168   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:58.900172   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:58.903081   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:22:59.401118   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:59.401143   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.401153   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.401156   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.404480   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:59.405079   31154 node_ready.go:49] node "ha-193737-m03" has status "Ready":"True"
	I1001 19:22:59.405100   31154 node_ready.go:38] duration metric: took 16.505122802s for node "ha-193737-m03" to be "Ready" ...
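
The node_ready wait above polls GET /api/v1/nodes/ha-193737-m03 roughly every 500ms until the node reports Ready (about 16.5s here). A minimal client-go sketch of the same check, assuming a kubeconfig path and a hard-coded node name for illustration; this is not minikube's node_ready implementation.

```go
// A sketch (not minikube's node_ready code): poll a node's Ready condition with
// client-go until it is True or a deadline passes. Kubeconfig path is a placeholder.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeIsReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // the log waits up to 6m0s
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-193737-m03", metav1.GetOptions{})
		if err == nil && nodeIsReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // roughly the polling interval seen above
	}
	fmt.Println("timed out waiting for node to become Ready")
}
```
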
	I1001 19:22:59.405110   31154 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 19:22:59.405190   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:22:59.405207   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.405217   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.405227   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.412572   31154 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1001 19:22:59.420220   31154 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hd5hv" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.420321   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hd5hv
	I1001 19:22:59.420334   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.420345   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.420353   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.423179   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:22:59.423949   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:22:59.423964   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.423970   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.423975   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.426304   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:22:59.426762   31154 pod_ready.go:93] pod "coredns-7c65d6cfc9-hd5hv" in "kube-system" namespace has status "Ready":"True"
	I1001 19:22:59.426780   31154 pod_ready.go:82] duration metric: took 6.530664ms for pod "coredns-7c65d6cfc9-hd5hv" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.426796   31154 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v2wsx" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.426857   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-v2wsx
	I1001 19:22:59.426866   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.426876   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.426887   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.429141   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:22:59.429823   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:22:59.429840   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.429848   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.429852   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.431860   31154 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1001 19:22:59.432333   31154 pod_ready.go:93] pod "coredns-7c65d6cfc9-v2wsx" in "kube-system" namespace has status "Ready":"True"
	I1001 19:22:59.432348   31154 pod_ready.go:82] duration metric: took 5.544704ms for pod "coredns-7c65d6cfc9-v2wsx" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.432374   31154 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.432437   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-ha-193737
	I1001 19:22:59.432448   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.432456   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.432459   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.434479   31154 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1001 19:22:59.435042   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:22:59.435057   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.435063   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.435067   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.437217   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:22:59.437787   31154 pod_ready.go:93] pod "etcd-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:22:59.437803   31154 pod_ready.go:82] duration metric: took 5.420394ms for pod "etcd-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.437813   31154 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.437864   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-ha-193737-m02
	I1001 19:22:59.437874   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.437883   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.437892   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.440631   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:22:59.441277   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:22:59.441295   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.441316   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.441325   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.448195   31154 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1001 19:22:59.448905   31154 pod_ready.go:93] pod "etcd-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:22:59.448925   31154 pod_ready.go:82] duration metric: took 11.104591ms for pod "etcd-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.448938   31154 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.601259   31154 request.go:632] Waited for 152.231969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-ha-193737-m03
	I1001 19:22:59.601316   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-ha-193737-m03
	I1001 19:22:59.601321   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.601329   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.601333   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.604878   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:59.801921   31154 request.go:632] Waited for 196.382761ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:59.802008   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:59.802021   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.802031   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.802037   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.805203   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:59.806083   31154 pod_ready.go:93] pod "etcd-ha-193737-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 19:22:59.806103   31154 pod_ready.go:82] duration metric: took 357.156614ms for pod "etcd-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
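
The "Waited for ... due to client-side throttling, not priority and fairness" messages above come from client-go's local token-bucket rate limiter, which delays requests once the configured QPS/Burst budget is spent; they are not server-side API Priority and Fairness rejections. A small sketch of raising those limits on rest.Config, with illustrative values and a placeholder kubeconfig path.

```go
// A sketch (not minikube's code): raising client-go's client-side rate limits.
// rest.Config defaults are QPS=5 and Burst=10; the numbers below are illustrative.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // sustained requests per second before the limiter delays calls
	cfg.Burst = 100 // short bursts allowed above the sustained rate
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}
```
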
	I1001 19:22:59.806134   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:00.001202   31154 request.go:632] Waited for 194.974996ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737
	I1001 19:23:00.001255   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737
	I1001 19:23:00.001260   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:00.001267   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:00.001271   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:00.005307   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:23:00.201989   31154 request.go:632] Waited for 195.321685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:00.202114   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:00.202132   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:00.202146   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:00.202158   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:00.205788   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:00.206508   31154 pod_ready.go:93] pod "kube-apiserver-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:00.206529   31154 pod_ready.go:82] duration metric: took 400.381151ms for pod "kube-apiserver-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:00.206541   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:00.401602   31154 request.go:632] Waited for 194.993098ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737-m02
	I1001 19:23:00.401663   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737-m02
	I1001 19:23:00.401668   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:00.401676   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:00.401680   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:00.405450   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:00.601599   31154 request.go:632] Waited for 195.316962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:00.601692   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:00.601700   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:00.601707   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:00.601711   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:00.605188   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:00.605660   31154 pod_ready.go:93] pod "kube-apiserver-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:00.605679   31154 pod_ready.go:82] duration metric: took 399.130829ms for pod "kube-apiserver-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:00.605688   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:00.801836   31154 request.go:632] Waited for 196.081559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737-m03
	I1001 19:23:00.801903   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737-m03
	I1001 19:23:00.801908   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:00.801926   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:00.801931   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:00.805500   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:01.001996   31154 request.go:632] Waited for 195.706291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:01.002060   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:01.002068   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:01.002082   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:01.002090   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:01.005674   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:01.006438   31154 pod_ready.go:93] pod "kube-apiserver-ha-193737-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:01.006466   31154 pod_ready.go:82] duration metric: took 400.769669ms for pod "kube-apiserver-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:01.006480   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:01.201564   31154 request.go:632] Waited for 195.007953ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737
	I1001 19:23:01.201618   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737
	I1001 19:23:01.201623   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:01.201630   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:01.201634   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:01.204998   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:01.402159   31154 request.go:632] Waited for 196.410696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:01.402225   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:01.402232   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:01.402243   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:01.402250   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:01.405639   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:01.406259   31154 pod_ready.go:93] pod "kube-controller-manager-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:01.406284   31154 pod_ready.go:82] duration metric: took 399.796485ms for pod "kube-controller-manager-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:01.406298   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:01.601556   31154 request.go:632] Waited for 195.171182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737-m02
	I1001 19:23:01.601629   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737-m02
	I1001 19:23:01.601638   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:01.601646   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:01.601655   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:01.605271   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:01.801581   31154 request.go:632] Waited for 195.404456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:01.801644   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:01.801651   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:01.801662   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:01.801669   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:01.805042   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:01.805673   31154 pod_ready.go:93] pod "kube-controller-manager-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:01.805694   31154 pod_ready.go:82] duration metric: took 399.387622ms for pod "kube-controller-manager-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:01.805707   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:02.001904   31154 request.go:632] Waited for 195.994245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737-m03
	I1001 19:23:02.002040   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737-m03
	I1001 19:23:02.002064   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:02.002075   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:02.002080   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:02.005612   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:02.201553   31154 request.go:632] Waited for 195.185972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:02.201606   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:02.201612   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:02.201628   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:02.201645   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:02.205018   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:02.205533   31154 pod_ready.go:93] pod "kube-controller-manager-ha-193737-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:02.205552   31154 pod_ready.go:82] duration metric: took 399.838551ms for pod "kube-controller-manager-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:02.205563   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4294m" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:02.401983   31154 request.go:632] Waited for 196.357491ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4294m
	I1001 19:23:02.402038   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4294m
	I1001 19:23:02.402043   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:02.402049   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:02.402054   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:02.405225   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:02.601208   31154 request.go:632] Waited for 195.289332ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:02.601293   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:02.601304   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:02.601316   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:02.601328   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:02.604768   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:02.605212   31154 pod_ready.go:93] pod "kube-proxy-4294m" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:02.605230   31154 pod_ready.go:82] duration metric: took 399.66052ms for pod "kube-proxy-4294m" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:02.605242   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9pm4t" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:02.801359   31154 request.go:632] Waited for 196.035084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9pm4t
	I1001 19:23:02.801440   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9pm4t
	I1001 19:23:02.801448   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:02.801462   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:02.801473   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:02.804772   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:03.001444   31154 request.go:632] Waited for 196.042411ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:03.001517   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:03.001522   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:03.001536   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:03.001543   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:03.005199   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:03.005738   31154 pod_ready.go:93] pod "kube-proxy-9pm4t" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:03.005763   31154 pod_ready.go:82] duration metric: took 400.510951ms for pod "kube-proxy-9pm4t" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:03.005773   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zpsll" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:03.201543   31154 request.go:632] Waited for 195.704518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zpsll
	I1001 19:23:03.201618   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zpsll
	I1001 19:23:03.201627   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:03.201634   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:03.201639   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:03.204535   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:23:03.401528   31154 request.go:632] Waited for 196.292025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:03.401585   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:03.401590   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:03.401597   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:03.401602   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:03.405338   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:03.406008   31154 pod_ready.go:93] pod "kube-proxy-zpsll" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:03.406025   31154 pod_ready.go:82] duration metric: took 400.246215ms for pod "kube-proxy-zpsll" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:03.406035   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:03.601668   31154 request.go:632] Waited for 195.548834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737
	I1001 19:23:03.601752   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737
	I1001 19:23:03.601760   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:03.601772   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:03.601779   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:03.605345   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:03.801308   31154 request.go:632] Waited for 195.294104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:03.801403   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:03.801417   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:03.801427   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:03.801434   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:03.804468   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:03.805276   31154 pod_ready.go:93] pod "kube-scheduler-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:03.805293   31154 pod_ready.go:82] duration metric: took 399.251767ms for pod "kube-scheduler-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:03.805303   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:04.001445   31154 request.go:632] Waited for 196.067713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737-m02
	I1001 19:23:04.001522   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737-m02
	I1001 19:23:04.001531   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:04.001541   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:04.001548   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:04.004705   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:04.201792   31154 request.go:632] Waited for 196.362451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:04.201872   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:04.201879   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:04.201889   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:04.201897   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:04.205376   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:04.206212   31154 pod_ready.go:93] pod "kube-scheduler-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:04.206235   31154 pod_ready.go:82] duration metric: took 400.923668ms for pod "kube-scheduler-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:04.206250   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:04.401166   31154 request.go:632] Waited for 194.837724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737-m03
	I1001 19:23:04.401244   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737-m03
	I1001 19:23:04.401252   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:04.401266   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:04.401273   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:04.404292   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:23:04.601244   31154 request.go:632] Waited for 196.299344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:04.601300   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:04.601306   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:04.601313   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:04.601317   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:04.604470   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:04.605038   31154 pod_ready.go:93] pod "kube-scheduler-ha-193737-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:04.605055   31154 pod_ready.go:82] duration metric: took 398.796981ms for pod "kube-scheduler-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:04.605065   31154 pod_ready.go:39] duration metric: took 5.199943212s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 19:23:04.605079   31154 api_server.go:52] waiting for apiserver process to appear ...
	I1001 19:23:04.605144   31154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 19:23:04.623271   31154 api_server.go:72] duration metric: took 21.981652881s to wait for apiserver process to appear ...
	I1001 19:23:04.623293   31154 api_server.go:88] waiting for apiserver healthz status ...
	I1001 19:23:04.623314   31154 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I1001 19:23:04.631212   31154 api_server.go:279] https://192.168.39.14:8443/healthz returned 200:
	ok
	I1001 19:23:04.631285   31154 round_trippers.go:463] GET https://192.168.39.14:8443/version
	I1001 19:23:04.631295   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:04.631303   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:04.631310   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:04.632155   31154 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1001 19:23:04.632226   31154 api_server.go:141] control plane version: v1.31.1
	I1001 19:23:04.632243   31154 api_server.go:131] duration metric: took 8.942184ms to wait for apiserver health ...
	I1001 19:23:04.632254   31154 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 19:23:04.801981   31154 request.go:632] Waited for 169.64915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:23:04.802068   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:23:04.802079   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:04.802090   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:04.802102   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:04.809502   31154 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1001 19:23:04.815901   31154 system_pods.go:59] 24 kube-system pods found
	I1001 19:23:04.815930   31154 system_pods.go:61] "coredns-7c65d6cfc9-hd5hv" [31f0afff-5571-46d6-888f-8982c71ba191] Running
	I1001 19:23:04.815935   31154 system_pods.go:61] "coredns-7c65d6cfc9-v2wsx" [8e3dd318-5017-4ada-bf2f-61b640ee2030] Running
	I1001 19:23:04.815939   31154 system_pods.go:61] "etcd-ha-193737" [99c3674d-50d6-4160-89dd-afb3c2e71039] Running
	I1001 19:23:04.815943   31154 system_pods.go:61] "etcd-ha-193737-m02" [a541b9c8-c10e-45b4-ac9c-d16e8bf659b0] Running
	I1001 19:23:04.815946   31154 system_pods.go:61] "etcd-ha-193737-m03" [de61043b-ff4c-4d28-ab01-d63abf25ef30] Running
	I1001 19:23:04.815949   31154 system_pods.go:61] "kindnet-bqht8" [3cef1863-ae14-4ab4-bc4f-5545e058cc9c] Running
	I1001 19:23:04.815953   31154 system_pods.go:61] "kindnet-drdlr" [13177890-a0eb-47ff-8fe2-585992810e47] Running
	I1001 19:23:04.815955   31154 system_pods.go:61] "kindnet-wnr6g" [89e11419-0c5c-486e-bdbf-eaf6fab1e62c] Running
	I1001 19:23:04.815958   31154 system_pods.go:61] "kube-apiserver-ha-193737" [432bf6a6-4607-4f55-a026-d910075d3145] Running
	I1001 19:23:04.815961   31154 system_pods.go:61] "kube-apiserver-ha-193737-m02" [379a24af-2148-4870-9f4b-25ad48943445] Running
	I1001 19:23:04.815964   31154 system_pods.go:61] "kube-apiserver-ha-193737-m03" [fbf7fbec-142d-4402-9bcc-c3e25e11ac2e] Running
	I1001 19:23:04.815968   31154 system_pods.go:61] "kube-controller-manager-ha-193737" [95fb44db-8cb3-4db9-8a84-ab82e15760de] Running
	I1001 19:23:04.815971   31154 system_pods.go:61] "kube-controller-manager-ha-193737-m02" [5b0821e0-673a-440c-8ca1-fefbfe913b94] Running
	I1001 19:23:04.815974   31154 system_pods.go:61] "kube-controller-manager-ha-193737-m03" [fd854d14-6abb-42eb-b560-e816e86c6767] Running
	I1001 19:23:04.815981   31154 system_pods.go:61] "kube-proxy-4294m" [f454ca0f-d662-4dfd-ab77-ebccaf3a6b12] Running
	I1001 19:23:04.815987   31154 system_pods.go:61] "kube-proxy-9pm4t" [5dba191b-ba4a-4a22-80df-65afd1dcbfb5] Running
	I1001 19:23:04.815989   31154 system_pods.go:61] "kube-proxy-zpsll" [c18fec3c-2880-4860-b220-a44d5e523bed] Running
	I1001 19:23:04.815998   31154 system_pods.go:61] "kube-scheduler-ha-193737" [46ca2b37-6145-4111-9088-bcf51307be3a] Running
	I1001 19:23:04.816002   31154 system_pods.go:61] "kube-scheduler-ha-193737-m02" [72be6cd5-d226-46d6-b675-99014e544dfb] Running
	I1001 19:23:04.816005   31154 system_pods.go:61] "kube-scheduler-ha-193737-m03" [129167e7-febe-4de3-a35f-3f0e668c7a77] Running
	I1001 19:23:04.816008   31154 system_pods.go:61] "kube-vip-ha-193737" [cbe8e6a4-08f3-4db3-af4d-810a5592597c] Running
	I1001 19:23:04.816014   31154 system_pods.go:61] "kube-vip-ha-193737-m02" [da7dfa59-1b27-45ce-9e61-ad8da40ec548] Running
	I1001 19:23:04.816017   31154 system_pods.go:61] "kube-vip-ha-193737-m03" [7a9bbd2f-8b9a-4104-baf4-11efdd662028] Running
	I1001 19:23:04.816022   31154 system_pods.go:61] "storage-provisioner" [d5b587a6-418b-47e5-9bf7-3fb6fa5e3372] Running
	I1001 19:23:04.816027   31154 system_pods.go:74] duration metric: took 183.765578ms to wait for pod list to return data ...
	I1001 19:23:04.816036   31154 default_sa.go:34] waiting for default service account to be created ...
	I1001 19:23:05.001464   31154 request.go:632] Waited for 185.352635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/default/serviceaccounts
	I1001 19:23:05.001522   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/default/serviceaccounts
	I1001 19:23:05.001527   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:05.001534   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:05.001538   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:05.005437   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:05.005559   31154 default_sa.go:45] found service account: "default"
	I1001 19:23:05.005576   31154 default_sa.go:55] duration metric: took 189.530453ms for default service account to be created ...
	I1001 19:23:05.005589   31154 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 19:23:05.201939   31154 request.go:632] Waited for 196.276664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:23:05.201999   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:23:05.202009   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:05.202018   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:05.202026   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:05.208844   31154 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1001 19:23:05.215522   31154 system_pods.go:86] 24 kube-system pods found
	I1001 19:23:05.215551   31154 system_pods.go:89] "coredns-7c65d6cfc9-hd5hv" [31f0afff-5571-46d6-888f-8982c71ba191] Running
	I1001 19:23:05.215559   31154 system_pods.go:89] "coredns-7c65d6cfc9-v2wsx" [8e3dd318-5017-4ada-bf2f-61b640ee2030] Running
	I1001 19:23:05.215563   31154 system_pods.go:89] "etcd-ha-193737" [99c3674d-50d6-4160-89dd-afb3c2e71039] Running
	I1001 19:23:05.215567   31154 system_pods.go:89] "etcd-ha-193737-m02" [a541b9c8-c10e-45b4-ac9c-d16e8bf659b0] Running
	I1001 19:23:05.215570   31154 system_pods.go:89] "etcd-ha-193737-m03" [de61043b-ff4c-4d28-ab01-d63abf25ef30] Running
	I1001 19:23:05.215574   31154 system_pods.go:89] "kindnet-bqht8" [3cef1863-ae14-4ab4-bc4f-5545e058cc9c] Running
	I1001 19:23:05.215578   31154 system_pods.go:89] "kindnet-drdlr" [13177890-a0eb-47ff-8fe2-585992810e47] Running
	I1001 19:23:05.215581   31154 system_pods.go:89] "kindnet-wnr6g" [89e11419-0c5c-486e-bdbf-eaf6fab1e62c] Running
	I1001 19:23:05.215584   31154 system_pods.go:89] "kube-apiserver-ha-193737" [432bf6a6-4607-4f55-a026-d910075d3145] Running
	I1001 19:23:05.215588   31154 system_pods.go:89] "kube-apiserver-ha-193737-m02" [379a24af-2148-4870-9f4b-25ad48943445] Running
	I1001 19:23:05.215591   31154 system_pods.go:89] "kube-apiserver-ha-193737-m03" [fbf7fbec-142d-4402-9bcc-c3e25e11ac2e] Running
	I1001 19:23:05.215595   31154 system_pods.go:89] "kube-controller-manager-ha-193737" [95fb44db-8cb3-4db9-8a84-ab82e15760de] Running
	I1001 19:23:05.215598   31154 system_pods.go:89] "kube-controller-manager-ha-193737-m02" [5b0821e0-673a-440c-8ca1-fefbfe913b94] Running
	I1001 19:23:05.215601   31154 system_pods.go:89] "kube-controller-manager-ha-193737-m03" [fd854d14-6abb-42eb-b560-e816e86c6767] Running
	I1001 19:23:05.215603   31154 system_pods.go:89] "kube-proxy-4294m" [f454ca0f-d662-4dfd-ab77-ebccaf3a6b12] Running
	I1001 19:23:05.215606   31154 system_pods.go:89] "kube-proxy-9pm4t" [5dba191b-ba4a-4a22-80df-65afd1dcbfb5] Running
	I1001 19:23:05.215609   31154 system_pods.go:89] "kube-proxy-zpsll" [c18fec3c-2880-4860-b220-a44d5e523bed] Running
	I1001 19:23:05.215613   31154 system_pods.go:89] "kube-scheduler-ha-193737" [46ca2b37-6145-4111-9088-bcf51307be3a] Running
	I1001 19:23:05.215616   31154 system_pods.go:89] "kube-scheduler-ha-193737-m02" [72be6cd5-d226-46d6-b675-99014e544dfb] Running
	I1001 19:23:05.215621   31154 system_pods.go:89] "kube-scheduler-ha-193737-m03" [129167e7-febe-4de3-a35f-3f0e668c7a77] Running
	I1001 19:23:05.215626   31154 system_pods.go:89] "kube-vip-ha-193737" [cbe8e6a4-08f3-4db3-af4d-810a5592597c] Running
	I1001 19:23:05.215630   31154 system_pods.go:89] "kube-vip-ha-193737-m02" [da7dfa59-1b27-45ce-9e61-ad8da40ec548] Running
	I1001 19:23:05.215634   31154 system_pods.go:89] "kube-vip-ha-193737-m03" [7a9bbd2f-8b9a-4104-baf4-11efdd662028] Running
	I1001 19:23:05.215639   31154 system_pods.go:89] "storage-provisioner" [d5b587a6-418b-47e5-9bf7-3fb6fa5e3372] Running
	I1001 19:23:05.215647   31154 system_pods.go:126] duration metric: took 210.049347ms to wait for k8s-apps to be running ...
	I1001 19:23:05.215659   31154 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 19:23:05.215714   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 19:23:05.232730   31154 system_svc.go:56] duration metric: took 17.059785ms WaitForService to wait for kubelet
	I1001 19:23:05.232757   31154 kubeadm.go:582] duration metric: took 22.59114375s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 19:23:05.232773   31154 node_conditions.go:102] verifying NodePressure condition ...
	I1001 19:23:05.401103   31154 request.go:632] Waited for 168.256226ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes
	I1001 19:23:05.401154   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes
	I1001 19:23:05.401159   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:05.401165   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:05.401169   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:05.405382   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:23:05.406740   31154 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 19:23:05.406763   31154 node_conditions.go:123] node cpu capacity is 2
	I1001 19:23:05.406777   31154 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 19:23:05.406783   31154 node_conditions.go:123] node cpu capacity is 2
	I1001 19:23:05.406789   31154 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 19:23:05.406794   31154 node_conditions.go:123] node cpu capacity is 2
	I1001 19:23:05.406799   31154 node_conditions.go:105] duration metric: took 174.020761ms to run NodePressure ...
	I1001 19:23:05.406816   31154 start.go:241] waiting for startup goroutines ...
	I1001 19:23:05.406842   31154 start.go:255] writing updated cluster config ...
	I1001 19:23:05.407176   31154 ssh_runner.go:195] Run: rm -f paused
	I1001 19:23:05.459358   31154 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 19:23:05.461856   31154 out.go:177] * Done! kubectl is now configured to use "ha-193737" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 01 19:26:52 ha-193737 crio[661]: time="2024-10-01 19:26:52.598337595Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810812598311449,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6819dc7-b319-4e96-8564-a0800e20fd52 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:26:52 ha-193737 crio[661]: time="2024-10-01 19:26:52.599275280Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=535c2ec2-1123-429a-be32-4d50a5cd053c name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:52 ha-193737 crio[661]: time="2024-10-01 19:26:52.599369820Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=535c2ec2-1123-429a-be32-4d50a5cd053c name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:52 ha-193737 crio[661]: time="2024-10-01 19:26:52.599610271Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d523f1298c385184a9e7db15e8734998757e06fff0e55aa1844ab59ccf00f07e,PodSandboxId:8ddf36dc2effd5b387a7cb5392afc6a34d689dee2d0047e27480f81871db6f74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727810590371295276,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75485355206ed8610939d128b62d0d55ec84cbb8615f7261469f854e52b6447d,PodSandboxId:7ea8efe8e5b7908a13d9ad3be6c8e7a9e871b332f5889126ea13429055eaaed3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727810449416160580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3,PodSandboxId:b4ab4980fd9c6cd234ac1f28d252c896bc77b9f7b271045be59cae0784f96217,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449360614251,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a,PodSandboxId:69e4ceb6e3399a835dcd125c01919d9053abee035e3bf2b2595377489dc56cb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449354501899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-50
17-4ada-bf2f-61b640ee2030,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525,PodSandboxId:f7fcfb918d1fd69f11a77764747f9408ce5ec85d4f114f22269ab22622168240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278104
37213853474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c,PodSandboxId:65474abfbeabf483f4a28a3e3229b175f3f41ffad78ea5d1303d49d9419d3ec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727810437061931545,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c962c4138a0019c76f981b72e8948efd386403002bb686f7e70c38bc20c3d542,PodSandboxId:cb787d15fa3b89fa5c4a479def2aed57fb03a1d1abdf07f6170488ae8f5b774f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727810427447788769,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cf6ac3eb69fe181eb29ee323afb176,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7,PodSandboxId:4873897c8ffd7c74e4093155af6f4743491c9dcea652c8fba96e6904de277652,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727810424745051374,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e,PodSandboxId:c74bc4df7851a3393d5eca165d5e24dbdef7af644bc9844819bad425c8663aac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727810424759083818,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2c57920320eb65f4477cbff8aec818a7ecddb3461a77ca111172e4d1a5f7e71,PodSandboxId:f74fa319889b0624f5157b5a226af0e5a24f37f6922332d6e0037af95aa50aef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727810424668320387,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cd510d04d444e2a3fd26699f0dbb26,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc9d05172b801a89ec36c0d0d549bfeee1aafb67f6586e3f4f163cb63f01d062,PodSandboxId:d6e9deea0a8069c598634405f46a9fa565d1d0b888716c0c94e4ed5b44588a10,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727810424633610117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=535c2ec2-1123-429a-be32-4d50a5cd053c name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:52 ha-193737 crio[661]: time="2024-10-01 19:26:52.637148329Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9cf04e3b-0f91-4080-b10f-14cb5fac005a name=/runtime.v1.RuntimeService/Version
	Oct 01 19:26:52 ha-193737 crio[661]: time="2024-10-01 19:26:52.637239685Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9cf04e3b-0f91-4080-b10f-14cb5fac005a name=/runtime.v1.RuntimeService/Version
	Oct 01 19:26:52 ha-193737 crio[661]: time="2024-10-01 19:26:52.638193596Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=81a09c00-c722-44a1-8e7e-93708907f62f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:26:52 ha-193737 crio[661]: time="2024-10-01 19:26:52.638622806Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810812638598692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=81a09c00-c722-44a1-8e7e-93708907f62f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:26:52 ha-193737 crio[661]: time="2024-10-01 19:26:52.639190395Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aad65c14-bce0-4bb0-b23a-c6351f82b5e6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:52 ha-193737 crio[661]: time="2024-10-01 19:26:52.639245217Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aad65c14-bce0-4bb0-b23a-c6351f82b5e6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:52 ha-193737 crio[661]: time="2024-10-01 19:26:52.639521393Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d523f1298c385184a9e7db15e8734998757e06fff0e55aa1844ab59ccf00f07e,PodSandboxId:8ddf36dc2effd5b387a7cb5392afc6a34d689dee2d0047e27480f81871db6f74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727810590371295276,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75485355206ed8610939d128b62d0d55ec84cbb8615f7261469f854e52b6447d,PodSandboxId:7ea8efe8e5b7908a13d9ad3be6c8e7a9e871b332f5889126ea13429055eaaed3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727810449416160580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3,PodSandboxId:b4ab4980fd9c6cd234ac1f28d252c896bc77b9f7b271045be59cae0784f96217,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449360614251,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a,PodSandboxId:69e4ceb6e3399a835dcd125c01919d9053abee035e3bf2b2595377489dc56cb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449354501899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-50
17-4ada-bf2f-61b640ee2030,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525,PodSandboxId:f7fcfb918d1fd69f11a77764747f9408ce5ec85d4f114f22269ab22622168240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278104
37213853474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c,PodSandboxId:65474abfbeabf483f4a28a3e3229b175f3f41ffad78ea5d1303d49d9419d3ec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727810437061931545,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c962c4138a0019c76f981b72e8948efd386403002bb686f7e70c38bc20c3d542,PodSandboxId:cb787d15fa3b89fa5c4a479def2aed57fb03a1d1abdf07f6170488ae8f5b774f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727810427447788769,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cf6ac3eb69fe181eb29ee323afb176,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7,PodSandboxId:4873897c8ffd7c74e4093155af6f4743491c9dcea652c8fba96e6904de277652,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727810424745051374,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e,PodSandboxId:c74bc4df7851a3393d5eca165d5e24dbdef7af644bc9844819bad425c8663aac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727810424759083818,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2c57920320eb65f4477cbff8aec818a7ecddb3461a77ca111172e4d1a5f7e71,PodSandboxId:f74fa319889b0624f5157b5a226af0e5a24f37f6922332d6e0037af95aa50aef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727810424668320387,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cd510d04d444e2a3fd26699f0dbb26,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc9d05172b801a89ec36c0d0d549bfeee1aafb67f6586e3f4f163cb63f01d062,PodSandboxId:d6e9deea0a8069c598634405f46a9fa565d1d0b888716c0c94e4ed5b44588a10,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727810424633610117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aad65c14-bce0-4bb0-b23a-c6351f82b5e6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:52 ha-193737 crio[661]: time="2024-10-01 19:26:52.677892790Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c8aec6dd-3353-432f-8a6b-4f7cca3a15cd name=/runtime.v1.RuntimeService/Version
	Oct 01 19:26:52 ha-193737 crio[661]: time="2024-10-01 19:26:52.677988180Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c8aec6dd-3353-432f-8a6b-4f7cca3a15cd name=/runtime.v1.RuntimeService/Version
	Oct 01 19:26:52 ha-193737 crio[661]: time="2024-10-01 19:26:52.679313401Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=42b48f8c-9226-4197-bdda-2383f7329faa name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:26:52 ha-193737 crio[661]: time="2024-10-01 19:26:52.679822397Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810812679796532,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=42b48f8c-9226-4197-bdda-2383f7329faa name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:26:52 ha-193737 crio[661]: time="2024-10-01 19:26:52.680563583Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c14723a5-25d0-492c-8b28-28931fa98492 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:52 ha-193737 crio[661]: time="2024-10-01 19:26:52.680636665Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c14723a5-25d0-492c-8b28-28931fa98492 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:52 ha-193737 crio[661]: time="2024-10-01 19:26:52.680966495Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d523f1298c385184a9e7db15e8734998757e06fff0e55aa1844ab59ccf00f07e,PodSandboxId:8ddf36dc2effd5b387a7cb5392afc6a34d689dee2d0047e27480f81871db6f74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727810590371295276,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75485355206ed8610939d128b62d0d55ec84cbb8615f7261469f854e52b6447d,PodSandboxId:7ea8efe8e5b7908a13d9ad3be6c8e7a9e871b332f5889126ea13429055eaaed3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727810449416160580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3,PodSandboxId:b4ab4980fd9c6cd234ac1f28d252c896bc77b9f7b271045be59cae0784f96217,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449360614251,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a,PodSandboxId:69e4ceb6e3399a835dcd125c01919d9053abee035e3bf2b2595377489dc56cb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449354501899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-50
17-4ada-bf2f-61b640ee2030,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525,PodSandboxId:f7fcfb918d1fd69f11a77764747f9408ce5ec85d4f114f22269ab22622168240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278104
37213853474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c,PodSandboxId:65474abfbeabf483f4a28a3e3229b175f3f41ffad78ea5d1303d49d9419d3ec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727810437061931545,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c962c4138a0019c76f981b72e8948efd386403002bb686f7e70c38bc20c3d542,PodSandboxId:cb787d15fa3b89fa5c4a479def2aed57fb03a1d1abdf07f6170488ae8f5b774f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727810427447788769,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cf6ac3eb69fe181eb29ee323afb176,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7,PodSandboxId:4873897c8ffd7c74e4093155af6f4743491c9dcea652c8fba96e6904de277652,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727810424745051374,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e,PodSandboxId:c74bc4df7851a3393d5eca165d5e24dbdef7af644bc9844819bad425c8663aac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727810424759083818,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2c57920320eb65f4477cbff8aec818a7ecddb3461a77ca111172e4d1a5f7e71,PodSandboxId:f74fa319889b0624f5157b5a226af0e5a24f37f6922332d6e0037af95aa50aef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727810424668320387,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cd510d04d444e2a3fd26699f0dbb26,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc9d05172b801a89ec36c0d0d549bfeee1aafb67f6586e3f4f163cb63f01d062,PodSandboxId:d6e9deea0a8069c598634405f46a9fa565d1d0b888716c0c94e4ed5b44588a10,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727810424633610117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c14723a5-25d0-492c-8b28-28931fa98492 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:52 ha-193737 crio[661]: time="2024-10-01 19:26:52.716398846Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=977ceb7d-a1ff-47f7-988d-15916413b469 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:26:52 ha-193737 crio[661]: time="2024-10-01 19:26:52.716477461Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=977ceb7d-a1ff-47f7-988d-15916413b469 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:26:52 ha-193737 crio[661]: time="2024-10-01 19:26:52.717359742Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9001372d-a85b-4c29-8696-b373f11ec3c0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:26:52 ha-193737 crio[661]: time="2024-10-01 19:26:52.717817091Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810812717794853,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9001372d-a85b-4c29-8696-b373f11ec3c0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:26:52 ha-193737 crio[661]: time="2024-10-01 19:26:52.718272085Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eabb04d3-f370-4889-adfd-7040ff24cc96 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:52 ha-193737 crio[661]: time="2024-10-01 19:26:52.718320406Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eabb04d3-f370-4889-adfd-7040ff24cc96 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:52 ha-193737 crio[661]: time="2024-10-01 19:26:52.718558169Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d523f1298c385184a9e7db15e8734998757e06fff0e55aa1844ab59ccf00f07e,PodSandboxId:8ddf36dc2effd5b387a7cb5392afc6a34d689dee2d0047e27480f81871db6f74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727810590371295276,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75485355206ed8610939d128b62d0d55ec84cbb8615f7261469f854e52b6447d,PodSandboxId:7ea8efe8e5b7908a13d9ad3be6c8e7a9e871b332f5889126ea13429055eaaed3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727810449416160580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3,PodSandboxId:b4ab4980fd9c6cd234ac1f28d252c896bc77b9f7b271045be59cae0784f96217,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449360614251,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a,PodSandboxId:69e4ceb6e3399a835dcd125c01919d9053abee035e3bf2b2595377489dc56cb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449354501899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-50
17-4ada-bf2f-61b640ee2030,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525,PodSandboxId:f7fcfb918d1fd69f11a77764747f9408ce5ec85d4f114f22269ab22622168240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278104
37213853474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c,PodSandboxId:65474abfbeabf483f4a28a3e3229b175f3f41ffad78ea5d1303d49d9419d3ec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727810437061931545,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c962c4138a0019c76f981b72e8948efd386403002bb686f7e70c38bc20c3d542,PodSandboxId:cb787d15fa3b89fa5c4a479def2aed57fb03a1d1abdf07f6170488ae8f5b774f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727810427447788769,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cf6ac3eb69fe181eb29ee323afb176,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7,PodSandboxId:4873897c8ffd7c74e4093155af6f4743491c9dcea652c8fba96e6904de277652,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727810424745051374,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e,PodSandboxId:c74bc4df7851a3393d5eca165d5e24dbdef7af644bc9844819bad425c8663aac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727810424759083818,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2c57920320eb65f4477cbff8aec818a7ecddb3461a77ca111172e4d1a5f7e71,PodSandboxId:f74fa319889b0624f5157b5a226af0e5a24f37f6922332d6e0037af95aa50aef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727810424668320387,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cd510d04d444e2a3fd26699f0dbb26,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc9d05172b801a89ec36c0d0d549bfeee1aafb67f6586e3f4f163cb63f01d062,PodSandboxId:d6e9deea0a8069c598634405f46a9fa565d1d0b888716c0c94e4ed5b44588a10,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727810424633610117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eabb04d3-f370-4889-adfd-7040ff24cc96 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d523f1298c385       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   8ddf36dc2effd       busybox-7dff88458-rbjkx
	75485355206ed       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   7ea8efe8e5b79       storage-provisioner
	b9a32cfd9baec       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   b4ab4980fd9c6       coredns-7c65d6cfc9-hd5hv
	c598f8345f1d8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   69e4ceb6e3399       coredns-7c65d6cfc9-v2wsx
	25b91984e532b       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   f7fcfb918d1fd       kindnet-wnr6g
	6ce5a1ca06729       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   65474abfbeabf       kube-proxy-zpsll
	c962c4138a001       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   cb787d15fa3b8       kube-vip-ha-193737
	7092a3841df08       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   c74bc4df7851a       etcd-ha-193737
	d7d722793679c       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   4873897c8ffd7       kube-scheduler-ha-193737
	d2c57920320eb       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   f74fa319889b0       kube-apiserver-ha-193737
	fc9d05172b801       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   d6e9deea0a806       kube-controller-manager-ha-193737
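
The container listing above is CRI-O's own view of the primary control-plane node. As a rough, hypothetical way to reproduce the same view by hand (assuming the ha-193737 profile is still running and crictl is available inside the node, as it normally is on minikube's KVM images; the commands below are illustrative and were not part of the test run):

    minikube ssh -p ha-193737 -- sudo crictl ps -a            # list all containers CRI-O knows about
    minikube ssh -p ha-193737 -- sudo crictl logs <container-id>   # <container-id> is a placeholder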
	
	
	==> coredns [b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3] <==
	[INFO] 10.244.1.2:43526 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.003536908s
	[INFO] 10.244.1.2:59594 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.012224538s
	[INFO] 10.244.2.2:37785 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000112105s
	[INFO] 10.244.0.4:34398 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000118394s
	[INFO] 10.244.0.4:35218 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001965777s
	[INFO] 10.244.1.2:56827 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00018086s
	[INFO] 10.244.1.2:50439 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003922693s
	[INFO] 10.244.2.2:33611 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123417s
	[INFO] 10.244.2.2:37877 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000204398s
	[INFO] 10.244.2.2:42894 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000164711s
	[INFO] 10.244.0.4:58512 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012749s
	[INFO] 10.244.0.4:60496 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126088s
	[INFO] 10.244.0.4:42876 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000054151s
	[INFO] 10.244.0.4:46048 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001023388s
	[INFO] 10.244.0.4:45307 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069619s
	[INFO] 10.244.0.4:54830 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086737s
	[INFO] 10.244.1.2:56566 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104818s
	[INFO] 10.244.2.2:44960 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017462s
	[INFO] 10.244.2.2:35520 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000147677s
	[INFO] 10.244.0.4:34887 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000089068s
	[INFO] 10.244.0.4:47038 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093137s
	[INFO] 10.244.1.2:44935 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000181924s
	[INFO] 10.244.2.2:51593 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000184246s
	[INFO] 10.244.2.2:37070 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000101666s
	[INFO] 10.244.0.4:49420 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115127s
	
	
	==> coredns [c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a] <==
	[INFO] 10.244.1.2:42880 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000139838s
	[INFO] 10.244.1.2:41832 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000162686s
	[INFO] 10.244.1.2:46697 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110911s
	[INFO] 10.244.2.2:37495 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001830157s
	[INFO] 10.244.2.2:39183 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155283s
	[INFO] 10.244.2.2:47614 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170182s
	[INFO] 10.244.2.2:52937 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001095974s
	[INFO] 10.244.2.2:59751 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106474s
	[INFO] 10.244.0.4:55786 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001514187s
	[INFO] 10.244.0.4:56387 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000050769s
	[INFO] 10.244.1.2:54787 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013733s
	[INFO] 10.244.1.2:58281 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113165s
	[INFO] 10.244.1.2:48712 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097722s
	[INFO] 10.244.2.2:57237 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152523s
	[INFO] 10.244.2.2:47314 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106445s
	[INFO] 10.244.0.4:43887 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000199016s
	[INFO] 10.244.0.4:49901 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000240769s
	[INFO] 10.244.1.2:54100 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210259s
	[INFO] 10.244.1.2:60342 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000221646s
	[INFO] 10.244.1.2:33783 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000165277s
	[INFO] 10.244.2.2:45378 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000197846s
	[INFO] 10.244.2.2:33324 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000101556s
	[INFO] 10.244.0.4:40016 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000071122s
	[INFO] 10.244.0.4:40114 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135338s
	[INFO] 10.244.0.4:53904 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00006854s
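
Both CoreDNS replicas above answer fully-qualified cluster names (kubernetes.default.svc.cluster.local, host.minikube.internal) with authoritative NOERROR responses; the NXDOMAIN lines are for bare names forwarded upstream, which is expected. A hedged spot-check from inside the cluster, assuming the busybox-7dff88458-rbjkx pod shown earlier is still running and the kubeconfig context carries the profile name:

    kubectl --context ha-193737 exec busybox-7dff88458-rbjkx -- nslookup kubernetes.default.svc.cluster.local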
	
	
	==> describe nodes <==
	Name:               ha-193737
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-193737
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=ha-193737
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T19_20_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:20:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-193737
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:26:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 19:23:34 +0000   Tue, 01 Oct 2024 19:20:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 19:23:34 +0000   Tue, 01 Oct 2024 19:20:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 19:23:34 +0000   Tue, 01 Oct 2024 19:20:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 19:23:34 +0000   Tue, 01 Oct 2024 19:20:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.14
	  Hostname:    ha-193737
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 008c1ccd624b4ab3b90055ff9f65b018
	  System UUID:                008c1ccd-624b-4ab3-b900-55ff9f65b018
	  Boot ID:                    ad12c9f1-7a18-4d35-9ec9-00d91da3365b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rbjkx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 coredns-7c65d6cfc9-hd5hv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m17s
	  kube-system                 coredns-7c65d6cfc9-v2wsx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m17s
	  kube-system                 etcd-ha-193737                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m22s
	  kube-system                 kindnet-wnr6g                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m18s
	  kube-system                 kube-apiserver-ha-193737             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-controller-manager-ha-193737    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-proxy-zpsll                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-scheduler-ha-193737             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-vip-ha-193737                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m15s                  kube-proxy       
	  Normal  Starting                 6m30s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     6m29s (x7 over 6m30s)  kubelet          Node ha-193737 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m29s (x8 over 6m30s)  kubelet          Node ha-193737 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m29s (x8 over 6m30s)  kubelet          Node ha-193737 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  6m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m23s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m22s                  kubelet          Node ha-193737 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m22s                  kubelet          Node ha-193737 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m22s                  kubelet          Node ha-193737 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m18s                  node-controller  Node ha-193737 event: Registered Node ha-193737 in Controller
	  Normal  NodeReady                6m5s                   kubelet          Node ha-193737 status is now: NodeReady
	  Normal  RegisteredNode           5m22s                  node-controller  Node ha-193737 event: Registered Node ha-193737 in Controller
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-193737 event: Registered Node ha-193737 in Controller
	
	
	Name:               ha-193737-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-193737-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=ha-193737
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T19_21_26_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:21:23 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-193737-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:24:17 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 01 Oct 2024 19:23:25 +0000   Tue, 01 Oct 2024 19:25:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 01 Oct 2024 19:23:25 +0000   Tue, 01 Oct 2024 19:25:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 01 Oct 2024 19:23:25 +0000   Tue, 01 Oct 2024 19:25:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 01 Oct 2024 19:23:25 +0000   Tue, 01 Oct 2024 19:25:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    ha-193737-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e20c76476d7c4acaa5fd75e5b8fa3bab
	  System UUID:                e20c7647-6d7c-4aca-a5fd-75e5b8fa3bab
	  Boot ID:                    6ae84c19-5df4-457f-b75c-eae86d5e0ee1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fz5bb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 etcd-ha-193737-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m28s
	  kube-system                 kindnet-drdlr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m30s
	  kube-system                 kube-apiserver-ha-193737-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-controller-manager-ha-193737-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-proxy-4294m                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-scheduler-ha-193737-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-vip-ha-193737-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m25s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m30s (x8 over 5m30s)  kubelet          Node ha-193737-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m30s (x8 over 5m30s)  kubelet          Node ha-193737-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m30s (x7 over 5m30s)  kubelet          Node ha-193737-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m28s                  node-controller  Node ha-193737-m02 event: Registered Node ha-193737-m02 in Controller
	  Normal  RegisteredNode           5m22s                  node-controller  Node ha-193737-m02 event: Registered Node ha-193737-m02 in Controller
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-193737-m02 event: Registered Node ha-193737-m02 in Controller
	  Normal  NodeNotReady             113s                   node-controller  Node ha-193737-m02 status is now: NodeNotReady
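
ha-193737-m02 is reporting Unknown conditions, unreachable taints, and a stale lease, consistent with the secondary control-plane node having been stopped during the StopSecondaryNode step. A hedged way to confirm the node view from the host, assuming the kubeconfig context is named after the minikube profile (the default):

    kubectl --context ha-193737 get nodes -o wide
    kubectl --context ha-193737 describe node ha-193737-m02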
	
	
	Name:               ha-193737-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-193737-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=ha-193737
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T19_22_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:22:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-193737-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:26:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 19:23:39 +0000   Tue, 01 Oct 2024 19:22:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 19:23:39 +0000   Tue, 01 Oct 2024 19:22:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 19:23:39 +0000   Tue, 01 Oct 2024 19:22:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 19:23:39 +0000   Tue, 01 Oct 2024 19:22:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    ha-193737-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f175e16bf19e4217880e926a75ac0065
	  System UUID:                f175e16b-f19e-4217-880e-926a75ac0065
	  Boot ID:                    5dc1c664-a01d-46eb-a066-a1970597b392
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-qzzzv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 etcd-ha-193737-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m13s
	  kube-system                 kindnet-bqht8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m15s
	  kube-system                 kube-apiserver-ha-193737-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-controller-manager-ha-193737-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-proxy-9pm4t                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-scheduler-ha-193737-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-vip-ha-193737-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m10s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m15s (x8 over 4m15s)  kubelet          Node ha-193737-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m15s (x8 over 4m15s)  kubelet          Node ha-193737-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m15s (x7 over 4m15s)  kubelet          Node ha-193737-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-193737-m03 event: Registered Node ha-193737-m03 in Controller
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-193737-m03 event: Registered Node ha-193737-m03 in Controller
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-193737-m03 event: Registered Node ha-193737-m03 in Controller
	
	
	Name:               ha-193737-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-193737-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=ha-193737
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T19_23_47_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:23:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-193737-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:26:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 19:24:17 +0000   Tue, 01 Oct 2024 19:23:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 19:24:17 +0000   Tue, 01 Oct 2024 19:23:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 19:24:17 +0000   Tue, 01 Oct 2024 19:23:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 19:24:17 +0000   Tue, 01 Oct 2024 19:24:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.152
	  Hostname:    ha-193737-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef1097b5e0604ff19d7361f2921010b9
	  System UUID:                ef1097b5-e060-4ff1-9d73-61f2921010b9
	  Boot ID:                    e616be63-4a8a-41b8-a0fc-2b1d892a1200
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-h886q       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m7s
	  kube-system                 kube-proxy-hz2nn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  3m7s (x3 over 3m7s)  kubelet          Node ha-193737-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m7s (x3 over 3m7s)  kubelet          Node ha-193737-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m7s (x3 over 3m7s)  kubelet          Node ha-193737-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m3s                 node-controller  Node ha-193737-m04 event: Registered Node ha-193737-m04 in Controller
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-193737-m04 event: Registered Node ha-193737-m04 in Controller
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-193737-m04 event: Registered Node ha-193737-m04 in Controller
	  Normal  NodeReady                2m47s                kubelet          Node ha-193737-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 1 19:19] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050773] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037054] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.754509] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.921161] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Oct 1 19:20] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.804167] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.059657] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065329] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.157689] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.148971] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.256595] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +3.897654] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +5.026995] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.059544] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.061605] systemd-fstab-generator[1305]: Ignoring "noauto" option for root device
	[  +0.119912] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.150839] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.375138] kauditd_printk_skb: 38 callbacks suppressed
	[Oct 1 19:21] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e] <==
	{"level":"warn","ts":"2024-10-01T19:26:52.659150Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:52.758550Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:52.789082Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:52.981790Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:52.985563Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:52.995183Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:53.001683Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:53.008120Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:53.011511Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:53.016001Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:53.021472Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:53.028072Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:53.035399Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:53.042867Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:53.045946Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:53.059544Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:53.094085Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:53.100655Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:53.108183Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:53.111859Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:53.114796Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:53.121592Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:53.128914Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:53.135605Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:53.159087Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:26:53 up 7 min,  0 users,  load average: 0.35, 0.33, 0.18
	Linux ha-193737 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525] <==
	I1001 19:26:18.356246       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	I1001 19:26:28.353932       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I1001 19:26:28.354077       1 main.go:299] handling current node
	I1001 19:26:28.354108       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I1001 19:26:28.354126       1 main.go:322] Node ha-193737-m02 has CIDR [10.244.1.0/24] 
	I1001 19:26:28.354260       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I1001 19:26:28.354312       1 main.go:322] Node ha-193737-m03 has CIDR [10.244.2.0/24] 
	I1001 19:26:28.354433       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1001 19:26:28.354480       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	I1001 19:26:38.345063       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I1001 19:26:38.345186       1 main.go:299] handling current node
	I1001 19:26:38.345230       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I1001 19:26:38.345253       1 main.go:322] Node ha-193737-m02 has CIDR [10.244.1.0/24] 
	I1001 19:26:38.345420       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I1001 19:26:38.345447       1 main.go:322] Node ha-193737-m03 has CIDR [10.244.2.0/24] 
	I1001 19:26:38.345532       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1001 19:26:38.345554       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	I1001 19:26:48.348795       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I1001 19:26:48.348915       1 main.go:322] Node ha-193737-m02 has CIDR [10.244.1.0/24] 
	I1001 19:26:48.349232       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I1001 19:26:48.349245       1 main.go:322] Node ha-193737-m03 has CIDR [10.244.2.0/24] 
	I1001 19:26:48.349309       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1001 19:26:48.349316       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	I1001 19:26:48.349384       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I1001 19:26:48.349392       1 main.go:299] handling current node
	
	
	==> kube-apiserver [d2c57920320eb65f4477cbff8aec818a7ecddb3461a77ca111172e4d1a5f7e71] <==
	I1001 19:20:35.856444       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1001 19:20:35.965501       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1001 19:21:24.240949       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1001 19:21:24.240967       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 17.015µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1001 19:21:24.242740       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1001 19:21:24.244065       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1001 19:21:24.245377       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.686767ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1001 19:23:11.375797       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53914: use of closed network connection
	E1001 19:23:11.551258       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53928: use of closed network connection
	E1001 19:23:11.731362       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53936: use of closed network connection
	E1001 19:23:11.972041       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53954: use of closed network connection
	E1001 19:23:12.366625       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53984: use of closed network connection
	E1001 19:23:12.546073       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54012: use of closed network connection
	E1001 19:23:12.732610       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54022: use of closed network connection
	E1001 19:23:12.902151       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54038: use of closed network connection
	E1001 19:23:13.375286       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54102: use of closed network connection
	E1001 19:23:13.554664       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54126: use of closed network connection
	E1001 19:23:13.743236       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54138: use of closed network connection
	E1001 19:23:13.926913       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54164: use of closed network connection
	E1001 19:23:14.106331       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54176: use of closed network connection
	E1001 19:23:47.033544       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1001 19:23:47.034526       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 71.236µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1001 19:23:47.042011       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1001 19:23:47.046959       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1001 19:23:47.048673       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="15.259067ms" method="POST" path="/api/v1/namespaces/default/events" result=null
	
	
	==> kube-controller-manager [fc9d05172b801a89ec36c0d0d549bfeee1aafb67f6586e3f4f163cb63f01d062] <==
	I1001 19:23:46.953662       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-193737-m04\" does not exist"
	I1001 19:23:46.986878       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-193737-m04" podCIDRs=["10.244.3.0/24"]
	I1001 19:23:46.986941       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:46.987007       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:47.215804       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:47.592799       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:50.155095       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-193737-m04"
	I1001 19:23:50.259908       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:51.578375       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:51.680209       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:51.931826       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:52.014093       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:57.305544       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:24:06.597966       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:24:06.598358       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-193737-m04"
	I1001 19:24:06.614401       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:24:06.949883       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:24:17.699273       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:25:00.186561       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m02"
	I1001 19:25:00.186799       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-193737-m04"
	I1001 19:25:00.216973       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m02"
	I1001 19:25:00.303275       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="68.678995ms"
	I1001 19:25:00.303561       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="142.589µs"
	I1001 19:25:01.983529       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m02"
	I1001 19:25:05.453661       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m02"
	
	
	==> kube-proxy [6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 19:20:37.420079       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 19:20:37.442921       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.14"]
	E1001 19:20:37.443047       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 19:20:37.482251       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 19:20:37.482297       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 19:20:37.482322       1 server_linux.go:169] "Using iptables Proxier"
	I1001 19:20:37.485863       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 19:20:37.486623       1 server.go:483] "Version info" version="v1.31.1"
	I1001 19:20:37.486654       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 19:20:37.489107       1 config.go:199] "Starting service config controller"
	I1001 19:20:37.489328       1 config.go:105] "Starting endpoint slice config controller"
	I1001 19:20:37.489656       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 19:20:37.489772       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 19:20:37.491468       1 config.go:328] "Starting node config controller"
	I1001 19:20:37.491495       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 19:20:37.590528       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 19:20:37.590619       1 shared_informer.go:320] Caches are synced for service config
	I1001 19:20:37.591994       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7] <==
	E1001 19:20:29.084572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1001 19:20:30.974700       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1001 19:23:06.369501       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rbjkx\": pod busybox-7dff88458-rbjkx is already assigned to node \"ha-193737\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rbjkx" node="ha-193737"
	E1001 19:23:06.370091       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ba3ecbe1-fb88-4674-b679-a442b28cd68e(default/busybox-7dff88458-rbjkx) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-rbjkx"
	E1001 19:23:06.370388       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rbjkx\": pod busybox-7dff88458-rbjkx is already assigned to node \"ha-193737\"" pod="default/busybox-7dff88458-rbjkx"
	I1001 19:23:06.374870       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-rbjkx" node="ha-193737"
	E1001 19:23:06.474319       1 schedule_one.go:1078] "Error occurred" err="Pod default/busybox-7dff88458-9k8vh is already present in the active queue" pod="default/busybox-7dff88458-9k8vh"
	E1001 19:23:06.510626       1 schedule_one.go:1078] "Error occurred" err="Pod default/busybox-7dff88458-x4nmn is already present in the active queue" pod="default/busybox-7dff88458-x4nmn"
	E1001 19:23:47.032927       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-tfcsk\": pod kindnet-tfcsk is already assigned to node \"ha-193737-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-tfcsk" node="ha-193737-m04"
	E1001 19:23:47.033064       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-tfcsk\": pod kindnet-tfcsk is already assigned to node \"ha-193737-m04\"" pod="kube-system/kindnet-tfcsk"
	E1001 19:23:47.032927       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-hz2nn\": pod kube-proxy-hz2nn is already assigned to node \"ha-193737-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-hz2nn" node="ha-193737-m04"
	E1001 19:23:47.045815       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 4f960179-106c-4201-b54b-eea8c5aea0dc(kube-system/kube-proxy-hz2nn) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-hz2nn"
	E1001 19:23:47.046589       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-hz2nn\": pod kube-proxy-hz2nn is already assigned to node \"ha-193737-m04\"" pod="kube-system/kube-proxy-hz2nn"
	I1001 19:23:47.046769       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-hz2nn" node="ha-193737-m04"
	E1001 19:23:47.062993       1 schedule_one.go:953] "Scheduler cache AssumePod failed" err="pod 046c48a4-b41b-4a77-8949-aa553947416b(kube-system/kindnet-h886q) is in the cache, so can't be assumed" pod="kube-system/kindnet-h886q"
	E1001 19:23:47.065004       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="pod 046c48a4-b41b-4a77-8949-aa553947416b(kube-system/kindnet-h886q) is in the cache, so can't be assumed" pod="kube-system/kindnet-h886q"
	I1001 19:23:47.065109       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-h886q" node="ha-193737-m04"
	E1001 19:23:47.081592       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-z5qhk\": pod kube-proxy-z5qhk is already assigned to node \"ha-193737-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-z5qhk" node="ha-193737-m04"
	E1001 19:23:47.081864       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 785d6c85-2697-4f02-80a4-55483a0faa64(kube-system/kube-proxy-z5qhk) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-z5qhk"
	E1001 19:23:47.081920       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-z5qhk\": pod kube-proxy-z5qhk is already assigned to node \"ha-193737-m04\"" pod="kube-system/kube-proxy-z5qhk"
	I1001 19:23:47.083299       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-z5qhk" node="ha-193737-m04"
	E1001 19:23:47.138476       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4q2pc\": pod kindnet-4q2pc is already assigned to node \"ha-193737-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4q2pc" node="ha-193737-m04"
	E1001 19:23:47.138649       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f23b02a5-c64e-44c3-83b9-7192d19a6efc(kube-system/kindnet-4q2pc) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-4q2pc"
	E1001 19:23:47.138779       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4q2pc\": pod kindnet-4q2pc is already assigned to node \"ha-193737-m04\"" pod="kube-system/kindnet-4q2pc"
	I1001 19:23:47.138823       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-4q2pc" node="ha-193737-m04"
	
	
	==> kubelet <==
	Oct 01 19:25:31 ha-193737 kubelet[1313]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 19:25:31 ha-193737 kubelet[1313]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 19:25:31 ha-193737 kubelet[1313]: E1001 19:25:31.112855    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810731112438565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:25:31 ha-193737 kubelet[1313]: E1001 19:25:31.112899    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810731112438565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:25:41 ha-193737 kubelet[1313]: E1001 19:25:41.114457    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810741114104863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:25:41 ha-193737 kubelet[1313]: E1001 19:25:41.114791    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810741114104863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:25:51 ha-193737 kubelet[1313]: E1001 19:25:51.116278    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810751115811001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:25:51 ha-193737 kubelet[1313]: E1001 19:25:51.116653    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810751115811001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:01 ha-193737 kubelet[1313]: E1001 19:26:01.119303    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810761118827447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:01 ha-193737 kubelet[1313]: E1001 19:26:01.119351    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810761118827447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:11 ha-193737 kubelet[1313]: E1001 19:26:11.121360    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810771121035313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:11 ha-193737 kubelet[1313]: E1001 19:26:11.121412    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810771121035313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:21 ha-193737 kubelet[1313]: E1001 19:26:21.123512    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810781123120430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:21 ha-193737 kubelet[1313]: E1001 19:26:21.123938    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810781123120430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:31 ha-193737 kubelet[1313]: E1001 19:26:31.044582    1313 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 01 19:26:31 ha-193737 kubelet[1313]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 01 19:26:31 ha-193737 kubelet[1313]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 01 19:26:31 ha-193737 kubelet[1313]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 19:26:31 ha-193737 kubelet[1313]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 19:26:31 ha-193737 kubelet[1313]: E1001 19:26:31.126194    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810791125910385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:31 ha-193737 kubelet[1313]: E1001 19:26:31.126217    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810791125910385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:41 ha-193737 kubelet[1313]: E1001 19:26:41.128087    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810801127576002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:41 ha-193737 kubelet[1313]: E1001 19:26:41.128431    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810801127576002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:51 ha-193737 kubelet[1313]: E1001 19:26:51.130945    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810811130429680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:51 ha-193737 kubelet[1313]: E1001 19:26:51.131267    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810811130429680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-193737 -n ha-193737
helpers_test.go:261: (dbg) Run:  kubectl --context ha-193737 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.58s)
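For manual triage of this failure, the cluster state the harness checks can be re-inspected with the same commands recorded in the logs above; this is a minimal sketch, assuming the ha-193737 profile still exists and the out/minikube-linux-amd64 binary path from this report:

	# overall host/kubelet/apiserver status for each node in the ha-193737 profile
	out/minikube-linux-amd64 -p ha-193737 status -v=7 --alsologtostderr
	# node readiness as seen by the API server
	kubectl --context ha-193737 get nodes
	# any pods not in Running phase (same selector the post-mortem helper uses)
	kubectl --context ha-193737 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# last 25 log lines from each component, as captured in the post-mortem above
	out/minikube-linux-amd64 -p ha-193737 logs -n 25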

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (6.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 node start m02 -v=7 --alsologtostderr
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-193737 status -v=7 --alsologtostderr: (4.23206642s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-193737 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-193737 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-193737 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-193737 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-193737 -n ha-193737
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 logs -n 25
E1001 19:26:59.024953   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-193737 logs -n 25: (1.430881597s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m03:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737:/home/docker/cp-test_ha-193737-m03_ha-193737.txt                       |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737 sudo cat                                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m03_ha-193737.txt                                 |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m03:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m02:/home/docker/cp-test_ha-193737-m03_ha-193737-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737-m02 sudo cat                                          | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m03_ha-193737-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m03:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04:/home/docker/cp-test_ha-193737-m03_ha-193737-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737-m04 sudo cat                                          | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m03_ha-193737-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-193737 cp testdata/cp-test.txt                                                | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m04:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1122815483/001/cp-test_ha-193737-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m04:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737:/home/docker/cp-test_ha-193737-m04_ha-193737.txt                       |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737 sudo cat                                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m04_ha-193737.txt                                 |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m04:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m02:/home/docker/cp-test_ha-193737-m04_ha-193737-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737-m02 sudo cat                                          | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m04_ha-193737-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m04:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m03:/home/docker/cp-test_ha-193737-m04_ha-193737-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737-m03 sudo cat                                          | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m04_ha-193737-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-193737 node stop m02 -v=7                                                     | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-193737 node start m02 -v=7                                                    | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 19:19:47
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 19:19:47.806967   31154 out.go:345] Setting OutFile to fd 1 ...
	I1001 19:19:47.807072   31154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:19:47.807081   31154 out.go:358] Setting ErrFile to fd 2...
	I1001 19:19:47.807085   31154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:19:47.807300   31154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 19:19:47.807883   31154 out.go:352] Setting JSON to false
	I1001 19:19:47.808862   31154 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3730,"bootTime":1727806658,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 19:19:47.808959   31154 start.go:139] virtualization: kvm guest
	I1001 19:19:47.810915   31154 out.go:177] * [ha-193737] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 19:19:47.812033   31154 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 19:19:47.812047   31154 notify.go:220] Checking for updates...
	I1001 19:19:47.814140   31154 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 19:19:47.815207   31154 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 19:19:47.816467   31154 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:19:47.817736   31154 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 19:19:47.818886   31154 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 19:19:47.820159   31154 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 19:19:47.855456   31154 out.go:177] * Using the kvm2 driver based on user configuration
	I1001 19:19:47.856527   31154 start.go:297] selected driver: kvm2
	I1001 19:19:47.856547   31154 start.go:901] validating driver "kvm2" against <nil>
	I1001 19:19:47.856562   31154 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 19:19:47.857294   31154 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 19:19:47.857376   31154 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-11198/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 19:19:47.872487   31154 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 19:19:47.872546   31154 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 19:19:47.872796   31154 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 19:19:47.872826   31154 cni.go:84] Creating CNI manager for ""
	I1001 19:19:47.872874   31154 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1001 19:19:47.872886   31154 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 19:19:47.872938   31154 start.go:340] cluster config:
	{Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:19:47.873050   31154 iso.go:125] acquiring lock: {Name:mk06aa0d5182c9fbfa5e40313025cbec2b4400a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 19:19:47.874719   31154 out.go:177] * Starting "ha-193737" primary control-plane node in "ha-193737" cluster
	I1001 19:19:47.875804   31154 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 19:19:47.875840   31154 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 19:19:47.875850   31154 cache.go:56] Caching tarball of preloaded images
	I1001 19:19:47.875957   31154 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 19:19:47.875970   31154 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 19:19:47.876255   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:19:47.876273   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json: {Name:mk44677a1f0c01c3be022903d4a146ca8f437dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:19:47.876454   31154 start.go:360] acquireMachinesLock for ha-193737: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 19:19:47.876490   31154 start.go:364] duration metric: took 20.799µs to acquireMachinesLock for "ha-193737"
	I1001 19:19:47.876512   31154 start.go:93] Provisioning new machine with config: &{Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:19:47.876581   31154 start.go:125] createHost starting for "" (driver="kvm2")
	I1001 19:19:47.878132   31154 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 19:19:47.878257   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:19:47.878301   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:19:47.892637   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32839
	I1001 19:19:47.893161   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:19:47.893766   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:19:47.893788   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:19:47.894083   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:19:47.894225   31154 main.go:141] libmachine: (ha-193737) Calling .GetMachineName
	I1001 19:19:47.894350   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:19:47.894482   31154 start.go:159] libmachine.API.Create for "ha-193737" (driver="kvm2")
	I1001 19:19:47.894506   31154 client.go:168] LocalClient.Create starting
	I1001 19:19:47.894539   31154 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem
	I1001 19:19:47.894575   31154 main.go:141] libmachine: Decoding PEM data...
	I1001 19:19:47.894607   31154 main.go:141] libmachine: Parsing certificate...
	I1001 19:19:47.894667   31154 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem
	I1001 19:19:47.894686   31154 main.go:141] libmachine: Decoding PEM data...
	I1001 19:19:47.894699   31154 main.go:141] libmachine: Parsing certificate...
	I1001 19:19:47.894713   31154 main.go:141] libmachine: Running pre-create checks...
	I1001 19:19:47.894730   31154 main.go:141] libmachine: (ha-193737) Calling .PreCreateCheck
	I1001 19:19:47.895057   31154 main.go:141] libmachine: (ha-193737) Calling .GetConfigRaw
	I1001 19:19:47.895392   31154 main.go:141] libmachine: Creating machine...
	I1001 19:19:47.895405   31154 main.go:141] libmachine: (ha-193737) Calling .Create
	I1001 19:19:47.895568   31154 main.go:141] libmachine: (ha-193737) Creating KVM machine...
	I1001 19:19:47.896749   31154 main.go:141] libmachine: (ha-193737) DBG | found existing default KVM network
	I1001 19:19:47.897409   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:47.897251   31177 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211e0}
	I1001 19:19:47.897459   31154 main.go:141] libmachine: (ha-193737) DBG | created network xml: 
	I1001 19:19:47.897477   31154 main.go:141] libmachine: (ha-193737) DBG | <network>
	I1001 19:19:47.897495   31154 main.go:141] libmachine: (ha-193737) DBG |   <name>mk-ha-193737</name>
	I1001 19:19:47.897509   31154 main.go:141] libmachine: (ha-193737) DBG |   <dns enable='no'/>
	I1001 19:19:47.897529   31154 main.go:141] libmachine: (ha-193737) DBG |   
	I1001 19:19:47.897549   31154 main.go:141] libmachine: (ha-193737) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1001 19:19:47.897562   31154 main.go:141] libmachine: (ha-193737) DBG |     <dhcp>
	I1001 19:19:47.897573   31154 main.go:141] libmachine: (ha-193737) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1001 19:19:47.897582   31154 main.go:141] libmachine: (ha-193737) DBG |     </dhcp>
	I1001 19:19:47.897589   31154 main.go:141] libmachine: (ha-193737) DBG |   </ip>
	I1001 19:19:47.897594   31154 main.go:141] libmachine: (ha-193737) DBG |   
	I1001 19:19:47.897599   31154 main.go:141] libmachine: (ha-193737) DBG | </network>
	I1001 19:19:47.897608   31154 main.go:141] libmachine: (ha-193737) DBG | 
	I1001 19:19:47.902355   31154 main.go:141] libmachine: (ha-193737) DBG | trying to create private KVM network mk-ha-193737 192.168.39.0/24...
	I1001 19:19:47.965826   31154 main.go:141] libmachine: (ha-193737) DBG | private KVM network mk-ha-193737 192.168.39.0/24 created
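The XML just logged is plain libvirt network configuration. Assuming virsh is available on the host, a comparable network could be created by hand as sketched below (illustrative only; the kvm2 driver talks to libvirt through its API rather than this CLI, and the file name here is arbitrary):

  # Sketch: define and start a network equivalent to the XML above.
  virsh net-define mk-ha-193737.xml   # XML above saved to this (arbitrary) file
  virsh net-start mk-ha-193737
  virsh net-info mk-ha-193737         # confirm the network is active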
	I1001 19:19:47.965857   31154 main.go:141] libmachine: (ha-193737) Setting up store path in /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737 ...
	I1001 19:19:47.965875   31154 main.go:141] libmachine: (ha-193737) Building disk image from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 19:19:47.965943   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:47.965838   31177 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:19:47.966014   31154 main.go:141] libmachine: (ha-193737) Downloading /home/jenkins/minikube-integration/19736-11198/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 19:19:48.225463   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:48.225322   31177 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa...
	I1001 19:19:48.498755   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:48.498602   31177 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/ha-193737.rawdisk...
	I1001 19:19:48.498778   31154 main.go:141] libmachine: (ha-193737) DBG | Writing magic tar header
	I1001 19:19:48.498788   31154 main.go:141] libmachine: (ha-193737) DBG | Writing SSH key tar header
	I1001 19:19:48.498813   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:48.498738   31177 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737 ...
	I1001 19:19:48.498825   31154 main.go:141] libmachine: (ha-193737) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737
	I1001 19:19:48.498844   31154 main.go:141] libmachine: (ha-193737) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737 (perms=drwx------)
	I1001 19:19:48.498866   31154 main.go:141] libmachine: (ha-193737) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines (perms=drwxr-xr-x)
	I1001 19:19:48.498875   31154 main.go:141] libmachine: (ha-193737) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube (perms=drwxr-xr-x)
	I1001 19:19:48.498909   31154 main.go:141] libmachine: (ha-193737) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines
	I1001 19:19:48.498961   31154 main.go:141] libmachine: (ha-193737) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198 (perms=drwxrwxr-x)
	I1001 19:19:48.498975   31154 main.go:141] libmachine: (ha-193737) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:19:48.498992   31154 main.go:141] libmachine: (ha-193737) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198
	I1001 19:19:48.499012   31154 main.go:141] libmachine: (ha-193737) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 19:19:48.499035   31154 main.go:141] libmachine: (ha-193737) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 19:19:48.499048   31154 main.go:141] libmachine: (ha-193737) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 19:19:48.499056   31154 main.go:141] libmachine: (ha-193737) Creating domain...
	I1001 19:19:48.499066   31154 main.go:141] libmachine: (ha-193737) DBG | Checking permissions on dir: /home/jenkins
	I1001 19:19:48.499074   31154 main.go:141] libmachine: (ha-193737) DBG | Checking permissions on dir: /home
	I1001 19:19:48.499095   31154 main.go:141] libmachine: (ha-193737) DBG | Skipping /home - not owner
	I1001 19:19:48.500091   31154 main.go:141] libmachine: (ha-193737) define libvirt domain using xml: 
	I1001 19:19:48.500110   31154 main.go:141] libmachine: (ha-193737) <domain type='kvm'>
	I1001 19:19:48.500119   31154 main.go:141] libmachine: (ha-193737)   <name>ha-193737</name>
	I1001 19:19:48.500128   31154 main.go:141] libmachine: (ha-193737)   <memory unit='MiB'>2200</memory>
	I1001 19:19:48.500140   31154 main.go:141] libmachine: (ha-193737)   <vcpu>2</vcpu>
	I1001 19:19:48.500149   31154 main.go:141] libmachine: (ha-193737)   <features>
	I1001 19:19:48.500155   31154 main.go:141] libmachine: (ha-193737)     <acpi/>
	I1001 19:19:48.500161   31154 main.go:141] libmachine: (ha-193737)     <apic/>
	I1001 19:19:48.500166   31154 main.go:141] libmachine: (ha-193737)     <pae/>
	I1001 19:19:48.500178   31154 main.go:141] libmachine: (ha-193737)     
	I1001 19:19:48.500186   31154 main.go:141] libmachine: (ha-193737)   </features>
	I1001 19:19:48.500190   31154 main.go:141] libmachine: (ha-193737)   <cpu mode='host-passthrough'>
	I1001 19:19:48.500271   31154 main.go:141] libmachine: (ha-193737)   
	I1001 19:19:48.500322   31154 main.go:141] libmachine: (ha-193737)   </cpu>
	I1001 19:19:48.500344   31154 main.go:141] libmachine: (ha-193737)   <os>
	I1001 19:19:48.500376   31154 main.go:141] libmachine: (ha-193737)     <type>hvm</type>
	I1001 19:19:48.500385   31154 main.go:141] libmachine: (ha-193737)     <boot dev='cdrom'/>
	I1001 19:19:48.500394   31154 main.go:141] libmachine: (ha-193737)     <boot dev='hd'/>
	I1001 19:19:48.500402   31154 main.go:141] libmachine: (ha-193737)     <bootmenu enable='no'/>
	I1001 19:19:48.500407   31154 main.go:141] libmachine: (ha-193737)   </os>
	I1001 19:19:48.500422   31154 main.go:141] libmachine: (ha-193737)   <devices>
	I1001 19:19:48.500428   31154 main.go:141] libmachine: (ha-193737)     <disk type='file' device='cdrom'>
	I1001 19:19:48.500438   31154 main.go:141] libmachine: (ha-193737)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/boot2docker.iso'/>
	I1001 19:19:48.500448   31154 main.go:141] libmachine: (ha-193737)       <target dev='hdc' bus='scsi'/>
	I1001 19:19:48.500454   31154 main.go:141] libmachine: (ha-193737)       <readonly/>
	I1001 19:19:48.500461   31154 main.go:141] libmachine: (ha-193737)     </disk>
	I1001 19:19:48.500475   31154 main.go:141] libmachine: (ha-193737)     <disk type='file' device='disk'>
	I1001 19:19:48.500485   31154 main.go:141] libmachine: (ha-193737)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 19:19:48.500507   31154 main.go:141] libmachine: (ha-193737)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/ha-193737.rawdisk'/>
	I1001 19:19:48.500514   31154 main.go:141] libmachine: (ha-193737)       <target dev='hda' bus='virtio'/>
	I1001 19:19:48.500519   31154 main.go:141] libmachine: (ha-193737)     </disk>
	I1001 19:19:48.500525   31154 main.go:141] libmachine: (ha-193737)     <interface type='network'>
	I1001 19:19:48.500530   31154 main.go:141] libmachine: (ha-193737)       <source network='mk-ha-193737'/>
	I1001 19:19:48.500536   31154 main.go:141] libmachine: (ha-193737)       <model type='virtio'/>
	I1001 19:19:48.500541   31154 main.go:141] libmachine: (ha-193737)     </interface>
	I1001 19:19:48.500547   31154 main.go:141] libmachine: (ha-193737)     <interface type='network'>
	I1001 19:19:48.500552   31154 main.go:141] libmachine: (ha-193737)       <source network='default'/>
	I1001 19:19:48.500558   31154 main.go:141] libmachine: (ha-193737)       <model type='virtio'/>
	I1001 19:19:48.500570   31154 main.go:141] libmachine: (ha-193737)     </interface>
	I1001 19:19:48.500593   31154 main.go:141] libmachine: (ha-193737)     <serial type='pty'>
	I1001 19:19:48.500606   31154 main.go:141] libmachine: (ha-193737)       <target port='0'/>
	I1001 19:19:48.500616   31154 main.go:141] libmachine: (ha-193737)     </serial>
	I1001 19:19:48.500621   31154 main.go:141] libmachine: (ha-193737)     <console type='pty'>
	I1001 19:19:48.500632   31154 main.go:141] libmachine: (ha-193737)       <target type='serial' port='0'/>
	I1001 19:19:48.500644   31154 main.go:141] libmachine: (ha-193737)     </console>
	I1001 19:19:48.500651   31154 main.go:141] libmachine: (ha-193737)     <rng model='virtio'>
	I1001 19:19:48.500662   31154 main.go:141] libmachine: (ha-193737)       <backend model='random'>/dev/random</backend>
	I1001 19:19:48.500669   31154 main.go:141] libmachine: (ha-193737)     </rng>
	I1001 19:19:48.500674   31154 main.go:141] libmachine: (ha-193737)     
	I1001 19:19:48.500681   31154 main.go:141] libmachine: (ha-193737)     
	I1001 19:19:48.500687   31154 main.go:141] libmachine: (ha-193737)   </devices>
	I1001 19:19:48.500693   31154 main.go:141] libmachine: (ha-193737) </domain>
	I1001 19:19:48.500703   31154 main.go:141] libmachine: (ha-193737) 
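The domain definition above is likewise standard libvirt XML. A manual equivalent of the define/start/inspect steps that follow in the log would be the commands below (a sketch that assumes the XML is saved to an arbitrarily named file; it is not the driver's actual call path):

  virsh define ha-193737.xml    # "define libvirt domain using xml"
  virsh start ha-193737         # "Creating domain..."
  virsh dumpxml ha-193737       # "Getting domain xml..."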
	I1001 19:19:48.505062   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:e8:37:5d in network default
	I1001 19:19:48.505636   31154 main.go:141] libmachine: (ha-193737) Ensuring networks are active...
	I1001 19:19:48.505675   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:48.506541   31154 main.go:141] libmachine: (ha-193737) Ensuring network default is active
	I1001 19:19:48.506813   31154 main.go:141] libmachine: (ha-193737) Ensuring network mk-ha-193737 is active
	I1001 19:19:48.507255   31154 main.go:141] libmachine: (ha-193737) Getting domain xml...
	I1001 19:19:48.507904   31154 main.go:141] libmachine: (ha-193737) Creating domain...
	I1001 19:19:49.716659   31154 main.go:141] libmachine: (ha-193737) Waiting to get IP...
	I1001 19:19:49.717406   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:49.717831   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:49.717883   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:49.717825   31177 retry.go:31] will retry after 192.827447ms: waiting for machine to come up
	I1001 19:19:49.912407   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:49.912907   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:49.912957   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:49.912879   31177 retry.go:31] will retry after 258.269769ms: waiting for machine to come up
	I1001 19:19:50.172507   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:50.173033   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:50.173054   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:50.172948   31177 retry.go:31] will retry after 373.637188ms: waiting for machine to come up
	I1001 19:19:50.548615   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:50.549181   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:50.549210   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:50.549112   31177 retry.go:31] will retry after 430.626472ms: waiting for machine to come up
	I1001 19:19:50.981709   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:50.982164   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:50.982197   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:50.982117   31177 retry.go:31] will retry after 529.86174ms: waiting for machine to come up
	I1001 19:19:51.513872   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:51.514354   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:51.514379   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:51.514310   31177 retry.go:31] will retry after 925.92584ms: waiting for machine to come up
	I1001 19:19:52.441513   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:52.442015   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:52.442079   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:52.441913   31177 retry.go:31] will retry after 1.034076263s: waiting for machine to come up
	I1001 19:19:53.477995   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:53.478427   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:53.478449   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:53.478392   31177 retry.go:31] will retry after 1.13194403s: waiting for machine to come up
	I1001 19:19:54.612551   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:54.613118   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:54.613140   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:54.613054   31177 retry.go:31] will retry after 1.647034063s: waiting for machine to come up
	I1001 19:19:56.262733   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:56.263161   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:56.263186   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:56.263102   31177 retry.go:31] will retry after 1.500997099s: waiting for machine to come up
	I1001 19:19:57.765863   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:57.766323   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:57.766356   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:57.766274   31177 retry.go:31] will retry after 2.455749683s: waiting for machine to come up
	I1001 19:20:00.223334   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:00.223743   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:20:00.223759   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:20:00.223705   31177 retry.go:31] will retry after 2.437856543s: waiting for machine to come up
	I1001 19:20:02.664433   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:02.664809   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:20:02.664832   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:20:02.664763   31177 retry.go:31] will retry after 3.902681899s: waiting for machine to come up
	I1001 19:20:06.571440   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:06.571775   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:20:06.571797   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:20:06.571730   31177 retry.go:31] will retry after 5.423043301s: waiting for machine to come up
	I1001 19:20:11.999360   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:11.999779   31154 main.go:141] libmachine: (ha-193737) Found IP for machine: 192.168.39.14
	I1001 19:20:11.999815   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has current primary IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:11.999824   31154 main.go:141] libmachine: (ha-193737) Reserving static IP address...
	I1001 19:20:12.000199   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find host DHCP lease matching {name: "ha-193737", mac: "52:54:00:80:2b:09", ip: "192.168.39.14"} in network mk-ha-193737
	I1001 19:20:12.077653   31154 main.go:141] libmachine: (ha-193737) Reserved static IP address: 192.168.39.14
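Each retry above is a poll for a DHCP lease on the guest's MAC in network mk-ha-193737. Assuming virsh on the host, the same lease can be inspected manually (illustrative commands, not what the driver invokes):

  virsh net-dhcp-leases mk-ha-193737         # should show 192.168.39.14 for 52:54:00:80:2b:09
  virsh domifaddr ha-193737 --source lease   # the same lease, viewed per domain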
	I1001 19:20:12.077732   31154 main.go:141] libmachine: (ha-193737) DBG | Getting to WaitForSSH function...
	I1001 19:20:12.077743   31154 main.go:141] libmachine: (ha-193737) Waiting for SSH to be available...
	I1001 19:20:12.080321   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.080865   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:minikube Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.080898   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.081006   31154 main.go:141] libmachine: (ha-193737) DBG | Using SSH client type: external
	I1001 19:20:12.081047   31154 main.go:141] libmachine: (ha-193737) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa (-rw-------)
	I1001 19:20:12.081075   31154 main.go:141] libmachine: (ha-193737) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.14 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 19:20:12.081085   31154 main.go:141] libmachine: (ha-193737) DBG | About to run SSH command:
	I1001 19:20:12.081096   31154 main.go:141] libmachine: (ha-193737) DBG | exit 0
	I1001 19:20:12.208487   31154 main.go:141] libmachine: (ha-193737) DBG | SSH cmd err, output: <nil>: 
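Reformatted for readability, the external SSH probe recorded above amounts to this single command (options, key path and target taken directly from the log):

  ssh -F /dev/null -p 22 \
      -o ConnectionAttempts=3 -o ConnectTimeout=10 \
      -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
      -o PasswordAuthentication=no -o ServerAliveInterval=60 \
      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -o IdentitiesOnly=yes \
      -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa \
      docker@192.168.39.14 'exit 0'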
	I1001 19:20:12.208725   31154 main.go:141] libmachine: (ha-193737) KVM machine creation complete!
	I1001 19:20:12.209102   31154 main.go:141] libmachine: (ha-193737) Calling .GetConfigRaw
	I1001 19:20:12.209646   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:12.209809   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:12.209935   31154 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 19:20:12.209949   31154 main.go:141] libmachine: (ha-193737) Calling .GetState
	I1001 19:20:12.211166   31154 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 19:20:12.211190   31154 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 19:20:12.211195   31154 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 19:20:12.211201   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:12.213529   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.213857   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.213883   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.213972   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:12.214116   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.214264   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.214394   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:12.214556   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:12.214781   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:20:12.214795   31154 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 19:20:12.319892   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 19:20:12.319913   31154 main.go:141] libmachine: Detecting the provisioner...
	I1001 19:20:12.319921   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:12.322718   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.323165   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.323192   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.323331   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:12.323522   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.323695   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.323840   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:12.324072   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:12.324284   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:20:12.324296   31154 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 19:20:12.429264   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 19:20:12.429335   31154 main.go:141] libmachine: found compatible host: buildroot
	I1001 19:20:12.429344   31154 main.go:141] libmachine: Provisioning with buildroot...
	I1001 19:20:12.429358   31154 main.go:141] libmachine: (ha-193737) Calling .GetMachineName
	I1001 19:20:12.429572   31154 buildroot.go:166] provisioning hostname "ha-193737"
	I1001 19:20:12.429594   31154 main.go:141] libmachine: (ha-193737) Calling .GetMachineName
	I1001 19:20:12.429736   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:12.432551   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.432897   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.432926   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.433127   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:12.433317   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.433512   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.433661   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:12.433801   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:12.433993   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:20:12.434007   31154 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-193737 && echo "ha-193737" | sudo tee /etc/hostname
	I1001 19:20:12.557230   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-193737
	
	I1001 19:20:12.557264   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:12.560034   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.560377   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.560404   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.560580   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:12.560736   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.560897   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.561023   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:12.561173   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:12.561344   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:20:12.561360   31154 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-193737' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-193737/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-193737' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 19:20:12.673716   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
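The two commands above set the hostname and patch /etc/hosts on the guest; a quick manual check over the same SSH session would be (illustrative):

  hostname                      # expect: ha-193737
  grep 'ha-193737' /etc/hosts   # expect a 127.0.1.1 entry for the name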
	I1001 19:20:12.673759   31154 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 19:20:12.673797   31154 buildroot.go:174] setting up certificates
	I1001 19:20:12.673811   31154 provision.go:84] configureAuth start
	I1001 19:20:12.673825   31154 main.go:141] libmachine: (ha-193737) Calling .GetMachineName
	I1001 19:20:12.674136   31154 main.go:141] libmachine: (ha-193737) Calling .GetIP
	I1001 19:20:12.676892   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.677280   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.677321   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.677483   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:12.679978   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.680305   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.680326   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.680487   31154 provision.go:143] copyHostCerts
	I1001 19:20:12.680516   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:20:12.680561   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 19:20:12.680573   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:20:12.680654   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 19:20:12.680751   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:20:12.680775   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 19:20:12.680787   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:20:12.680824   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 19:20:12.680885   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:20:12.680909   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 19:20:12.680917   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:20:12.680951   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 19:20:12.681013   31154 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.ha-193737 san=[127.0.0.1 192.168.39.14 ha-193737 localhost minikube]
	I1001 19:20:12.842484   31154 provision.go:177] copyRemoteCerts
	I1001 19:20:12.842574   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 19:20:12.842621   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:12.845898   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.846287   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.846310   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.846561   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:12.846731   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.846941   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:12.847077   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:20:12.930698   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 19:20:12.930795   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 19:20:12.955852   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 19:20:12.955930   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1001 19:20:12.979656   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 19:20:12.979722   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 19:20:13.003473   31154 provision.go:87] duration metric: took 329.649424ms to configureAuth
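The scp calls above install the host-generated CA and server certificate/key under /etc/docker on the guest. A way to spot-check that the server certificate carries the SANs generated earlier (127.0.0.1, 192.168.39.14, ha-193737, localhost, minikube), assuming openssl is present on the guest:

  ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
  openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'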
	I1001 19:20:13.003500   31154 buildroot.go:189] setting minikube options for container-runtime
	I1001 19:20:13.003695   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:20:13.003768   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:13.006651   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.006965   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.006994   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.007204   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:13.007396   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:13.007569   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:13.007765   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:13.007963   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:13.008170   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:20:13.008194   31154 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 19:20:13.223895   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
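The tee/systemctl command above writes /etc/sysconfig/crio.minikube and restarts CRI-O; a simple way to verify the result on the guest (illustrative check, not part of the test):

  cat /etc/sysconfig/crio.minikube   # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
  sudo systemctl is-active crio      # expect: active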
	
	I1001 19:20:13.223928   31154 main.go:141] libmachine: Checking connection to Docker...
	I1001 19:20:13.223938   31154 main.go:141] libmachine: (ha-193737) Calling .GetURL
	I1001 19:20:13.225295   31154 main.go:141] libmachine: (ha-193737) DBG | Using libvirt version 6000000
	I1001 19:20:13.227525   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.227866   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.227899   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.227999   31154 main.go:141] libmachine: Docker is up and running!
	I1001 19:20:13.228014   31154 main.go:141] libmachine: Reticulating splines...
	I1001 19:20:13.228022   31154 client.go:171] duration metric: took 25.333507515s to LocalClient.Create
	I1001 19:20:13.228041   31154 start.go:167] duration metric: took 25.333560566s to libmachine.API.Create "ha-193737"
	I1001 19:20:13.228050   31154 start.go:293] postStartSetup for "ha-193737" (driver="kvm2")
	I1001 19:20:13.228060   31154 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 19:20:13.228083   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:13.228317   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 19:20:13.228339   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:13.230391   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.230709   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.230732   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.230837   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:13.230988   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:13.231120   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:13.231290   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:20:13.314353   31154 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 19:20:13.318432   31154 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 19:20:13.318458   31154 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 19:20:13.318541   31154 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 19:20:13.318638   31154 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 19:20:13.318652   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /etc/ssl/certs/184302.pem
	I1001 19:20:13.318780   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 19:20:13.328571   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:20:13.353035   31154 start.go:296] duration metric: took 124.970717ms for postStartSetup
	I1001 19:20:13.353110   31154 main.go:141] libmachine: (ha-193737) Calling .GetConfigRaw
	I1001 19:20:13.353736   31154 main.go:141] libmachine: (ha-193737) Calling .GetIP
	I1001 19:20:13.356423   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.356817   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.356852   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.357086   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:20:13.357278   31154 start.go:128] duration metric: took 25.480687424s to createHost
	I1001 19:20:13.357297   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:13.359783   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.360160   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.360189   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.360384   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:13.360591   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:13.360774   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:13.360932   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:13.361105   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:13.361274   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:20:13.361289   31154 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 19:20:13.464991   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727810413.446268696
	
	I1001 19:20:13.465023   31154 fix.go:216] guest clock: 1727810413.446268696
	I1001 19:20:13.465037   31154 fix.go:229] Guest: 2024-10-01 19:20:13.446268696 +0000 UTC Remote: 2024-10-01 19:20:13.35728811 +0000 UTC m=+25.585126920 (delta=88.980586ms)
	I1001 19:20:13.465070   31154 fix.go:200] guest clock delta is within tolerance: 88.980586ms
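The fix.go lines above compare the guest clock (read via `date +%s.%N`) against the host-side timestamp and accept the ~89ms delta. A minimal sketch of that check, assuming a 2s tolerance purely for illustration:

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, e.g. "1727810413.446268696".
	guestRaw := "1727810413.446268696"
	secs, err := strconv.ParseFloat(guestRaw, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))

	remote := time.Now() // host-side reference timestamp
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}

	const tolerance = 2 * time.Second // assumed threshold
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance %v\n", delta, tolerance)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance %v; would resync\n", delta, tolerance)
	}
}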
	I1001 19:20:13.465076   31154 start.go:83] releasing machines lock for "ha-193737", held for 25.588575039s
	I1001 19:20:13.465101   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:13.465340   31154 main.go:141] libmachine: (ha-193737) Calling .GetIP
	I1001 19:20:13.468083   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.468419   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.468447   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.468613   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:13.469143   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:13.469301   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:13.469362   31154 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 19:20:13.469413   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:13.469528   31154 ssh_runner.go:195] Run: cat /version.json
	I1001 19:20:13.469548   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:13.471980   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.472049   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.472309   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.472339   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.472393   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.472414   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.472482   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:13.472622   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:13.472666   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:13.472784   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:13.472833   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:13.472925   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:13.472991   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:20:13.473062   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:20:13.597462   31154 ssh_runner.go:195] Run: systemctl --version
	I1001 19:20:13.603452   31154 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 19:20:13.764276   31154 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 19:20:13.770676   31154 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 19:20:13.770753   31154 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 19:20:13.785990   31154 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 19:20:13.786018   31154 start.go:495] detecting cgroup driver to use...
	I1001 19:20:13.786088   31154 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 19:20:13.802042   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 19:20:13.815442   31154 docker.go:217] disabling cri-docker service (if available) ...
	I1001 19:20:13.815514   31154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 19:20:13.829012   31154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 19:20:13.842769   31154 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 19:20:13.956694   31154 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 19:20:14.102874   31154 docker.go:233] disabling docker service ...
	I1001 19:20:14.102940   31154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 19:20:14.117261   31154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 19:20:14.129985   31154 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 19:20:14.273597   31154 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 19:20:14.384529   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 19:20:14.397753   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 19:20:14.415792   31154 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 19:20:14.415850   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:14.426007   31154 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 19:20:14.426087   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:14.436393   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:14.446247   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:14.456029   31154 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 19:20:14.466078   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:14.475781   31154 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:14.492551   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
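The series of sed invocations above pins the pause image and switches cri-o to the cgroupfs cgroup manager in /etc/crio/crio.conf.d/02-crio.conf. A minimal sketch of the two core edits done in pure Go rather than sed (an assumption for a self-contained example; behaviour is equivalent):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}

	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))

	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}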
	I1001 19:20:14.502706   31154 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 19:20:14.512290   31154 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 19:20:14.512379   31154 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 19:20:14.525913   31154 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 19:20:14.535543   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:20:14.653960   31154 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 19:20:14.741173   31154 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 19:20:14.741263   31154 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 19:20:14.745800   31154 start.go:563] Will wait 60s for crictl version
	I1001 19:20:14.745869   31154 ssh_runner.go:195] Run: which crictl
	I1001 19:20:14.749449   31154 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 19:20:14.789074   31154 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 19:20:14.789159   31154 ssh_runner.go:195] Run: crio --version
	I1001 19:20:14.820545   31154 ssh_runner.go:195] Run: crio --version
	I1001 19:20:14.849920   31154 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 19:20:14.850894   31154 main.go:141] libmachine: (ha-193737) Calling .GetIP
	I1001 19:20:14.853389   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:14.853698   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:14.853724   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:14.853935   31154 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 19:20:14.857967   31154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
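The bash one-liner above rewrites /etc/hosts idempotently: any stale host.minikube.internal line is dropped and a fresh entry pointing at the gateway IP is appended. A minimal sketch of the same rewrite in Go (pure-Go file handling is an assumption; the log does it over SSH with grep/echo/cp):

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.1\thost.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // drop any stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)

	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}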
	I1001 19:20:14.870673   31154 kubeadm.go:883] updating cluster {Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 19:20:14.870794   31154 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 19:20:14.870846   31154 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 19:20:14.901722   31154 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1001 19:20:14.901791   31154 ssh_runner.go:195] Run: which lz4
	I1001 19:20:14.905716   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1001 19:20:14.905869   31154 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 19:20:14.909954   31154 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 19:20:14.909985   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1001 19:20:16.176019   31154 crio.go:462] duration metric: took 1.270229445s to copy over tarball
	I1001 19:20:16.176091   31154 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 19:20:18.196924   31154 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.020807915s)
	I1001 19:20:18.196955   31154 crio.go:469] duration metric: took 2.020904101s to extract the tarball
	I1001 19:20:18.196963   31154 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1001 19:20:18.232395   31154 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 19:20:18.277292   31154 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 19:20:18.277310   31154 cache_images.go:84] Images are preloaded, skipping loading
	I1001 19:20:18.277317   31154 kubeadm.go:934] updating node { 192.168.39.14 8443 v1.31.1 crio true true} ...
	I1001 19:20:18.277404   31154 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-193737 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 19:20:18.277469   31154 ssh_runner.go:195] Run: crio config
	I1001 19:20:18.320909   31154 cni.go:84] Creating CNI manager for ""
	I1001 19:20:18.320940   31154 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1001 19:20:18.320955   31154 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 19:20:18.320983   31154 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.14 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-193737 NodeName:ha-193737 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.14"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.14 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 19:20:18.321130   31154 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.14
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-193737"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.14
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.14"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 19:20:18.321154   31154 kube-vip.go:115] generating kube-vip config ...
	I1001 19:20:18.321192   31154 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1001 19:20:18.337979   31154 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1001 19:20:18.338099   31154 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
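The kube-vip.go lines above fill a static-pod template with the control-plane VIP (192.168.39.254), the API server port, and the auto-enabled load-balancing flag, then write it to /etc/kubernetes/manifests. A minimal text/template sketch of that rendering; the trimmed template below is illustrative, not minikube's full one:

package main

import (
	"os"
	"text/template"
)

const vipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.3
    args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
    - name: lb_enable
      value: "{{ .LoadBalancerEnabled }}"
  hostNetwork: true
`

func main() {
	data := struct {
		VIP                 string
		Port                int
		LoadBalancerEnabled bool
	}{
		VIP:                 "192.168.39.254",
		Port:                8443,
		LoadBalancerEnabled: true, // auto-enabled because the ip_vs modules loaded
	}
	t := template.Must(template.New("kube-vip").Parse(vipTmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}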
	I1001 19:20:18.338161   31154 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 19:20:18.347788   31154 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 19:20:18.347864   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1001 19:20:18.356907   31154 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1001 19:20:18.372922   31154 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 19:20:18.388904   31154 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I1001 19:20:18.404938   31154 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1001 19:20:18.421257   31154 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1001 19:20:18.425122   31154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 19:20:18.436829   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:20:18.545073   31154 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 19:20:18.560862   31154 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737 for IP: 192.168.39.14
	I1001 19:20:18.560887   31154 certs.go:194] generating shared ca certs ...
	I1001 19:20:18.560910   31154 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:18.561104   31154 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 19:20:18.561167   31154 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 19:20:18.561182   31154 certs.go:256] generating profile certs ...
	I1001 19:20:18.561249   31154 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key
	I1001 19:20:18.561277   31154 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.crt with IP's: []
	I1001 19:20:19.147252   31154 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.crt ...
	I1001 19:20:19.147288   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.crt: {Name:mk6cc12194e2b1b488446b45fb57531c12b19cd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:19.147481   31154 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key ...
	I1001 19:20:19.147500   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key: {Name:mk1f7ee6c9ea6b8fcc952a031324588416a57469 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:19.147599   31154 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.c3487d2e
	I1001 19:20:19.147622   31154 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.c3487d2e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.14 192.168.39.254]
	I1001 19:20:19.274032   31154 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.c3487d2e ...
	I1001 19:20:19.274061   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.c3487d2e: {Name:mk19f3cf4cd1f2fca54e40738408be6aa73421ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:19.274224   31154 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.c3487d2e ...
	I1001 19:20:19.274242   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.c3487d2e: {Name:mk2ba24a36a70c8a6e47019bdcda573a26500b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:19.274335   31154 certs.go:381] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.c3487d2e -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt
	I1001 19:20:19.274441   31154 certs.go:385] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.c3487d2e -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key
	I1001 19:20:19.274522   31154 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key
	I1001 19:20:19.274541   31154 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt with IP's: []
	I1001 19:20:19.432987   31154 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt ...
	I1001 19:20:19.433018   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt: {Name:mkaa29f743f43e700e39d0141b3a793971db9bab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:19.433198   31154 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key ...
	I1001 19:20:19.433215   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key: {Name:mkda8f4e7f39ac52933dd1a3f0855317051465de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
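The certs.go lines above generate the profile certificates, including an apiserver cert whose IP SANs cover the service VIP, localhost, the node IP, and the HA VIP. A minimal crypto/x509 sketch of signing such a cert with an existing CA; the ca.crt/ca.key paths, PKCS#1 key format, CommonName, and validity period are assumptions for illustration:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	caCertPEM, err := os.ReadFile("ca.crt") // assumed PEM CA certificate
	must(err)
	caKeyPEM, err := os.ReadFile("ca.key") // assumed PKCS#1 RSA CA key
	must(err)

	caBlock, _ := pem.Decode(caCertPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	must(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	must(err)

	// Fresh key pair for the apiserver certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"}, // CN is an assumption
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs copied from the log line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.14"),
			net.ParseIP("192.168.39.254"),
		},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	must(err)
	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}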
	I1001 19:20:19.433333   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 19:20:19.433358   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 19:20:19.433374   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 19:20:19.433394   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 19:20:19.433411   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 19:20:19.433428   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 19:20:19.433441   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 19:20:19.433457   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 19:20:19.433541   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem (1338 bytes)
	W1001 19:20:19.433593   31154 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430_empty.pem, impossibly tiny 0 bytes
	I1001 19:20:19.433606   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 19:20:19.433643   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 19:20:19.433673   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 19:20:19.433703   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 19:20:19.433758   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:20:19.433792   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem -> /usr/share/ca-certificates/18430.pem
	I1001 19:20:19.433812   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /usr/share/ca-certificates/184302.pem
	I1001 19:20:19.433830   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:20:19.434414   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 19:20:19.462971   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 19:20:19.486817   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 19:20:19.510214   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 19:20:19.536715   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1001 19:20:19.562219   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 19:20:19.587563   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 19:20:19.611975   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 19:20:19.635789   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem --> /usr/share/ca-certificates/18430.pem (1338 bytes)
	I1001 19:20:19.660541   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /usr/share/ca-certificates/184302.pem (1708 bytes)
	I1001 19:20:19.686922   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 19:20:19.713247   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 19:20:19.737109   31154 ssh_runner.go:195] Run: openssl version
	I1001 19:20:19.743466   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18430.pem && ln -fs /usr/share/ca-certificates/18430.pem /etc/ssl/certs/18430.pem"
	I1001 19:20:19.755116   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18430.pem
	I1001 19:20:19.760240   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 19:20:19.760326   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18430.pem
	I1001 19:20:19.767474   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18430.pem /etc/ssl/certs/51391683.0"
	I1001 19:20:19.779182   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/184302.pem && ln -fs /usr/share/ca-certificates/184302.pem /etc/ssl/certs/184302.pem"
	I1001 19:20:19.790431   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/184302.pem
	I1001 19:20:19.795533   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 19:20:19.795593   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/184302.pem
	I1001 19:20:19.801533   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/184302.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 19:20:19.812537   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 19:20:19.823577   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:20:19.828798   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:20:19.828870   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:20:19.835152   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
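The openssl/ln steps above install each CA into the guest trust store: the certificate's subject hash is computed with `openssl x509 -hash` and the PEM is symlinked as /etc/ssl/certs/<hash>.0, which is how OpenSSL locates CAs. A minimal sketch of that pairing (shelling out to openssl, as the log does):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // replace any stale link (ln -fs behaviour)
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pemPath)
}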
	I1001 19:20:19.846376   31154 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 19:20:19.850628   31154 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 19:20:19.850680   31154 kubeadm.go:392] StartCluster: {Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:20:19.850761   31154 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 19:20:19.850812   31154 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 19:20:19.892830   31154 cri.go:89] found id: ""
	I1001 19:20:19.892895   31154 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 19:20:19.902960   31154 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 19:20:19.913367   31154 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 19:20:19.923292   31154 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 19:20:19.923330   31154 kubeadm.go:157] found existing configuration files:
	
	I1001 19:20:19.923388   31154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 19:20:19.932878   31154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 19:20:19.932945   31154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 19:20:19.943333   31154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 19:20:19.952676   31154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 19:20:19.952738   31154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 19:20:19.962992   31154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 19:20:19.972649   31154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 19:20:19.972735   31154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 19:20:19.982834   31154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 19:20:19.993409   31154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 19:20:19.993469   31154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
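The cleanup loop above greps each kubeconfig under /etc/kubernetes for the HA endpoint and removes any file that is missing it (here all four are simply absent on first start), so kubeadm can write fresh ones. A minimal sketch of that keep-or-remove decision:

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}

	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && bytes.Contains(data, endpoint) {
			continue // config already targets the HA endpoint; keep it
		}
		// Missing file or wrong endpoint: remove it (ignoring "not found").
		_ = os.Remove(f)
		fmt.Println("removed stale config:", f)
	}
}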
	I1001 19:20:20.002988   31154 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 19:20:20.127435   31154 kubeadm.go:310] W1001 19:20:20.114172     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 19:20:20.128326   31154 kubeadm.go:310] W1001 19:20:20.115365     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 19:20:20.262781   31154 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 19:20:31.543814   31154 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 19:20:31.543907   31154 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 19:20:31.543995   31154 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 19:20:31.544073   31154 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 19:20:31.544148   31154 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 19:20:31.544203   31154 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 19:20:31.545532   31154 out.go:235]   - Generating certificates and keys ...
	I1001 19:20:31.545611   31154 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 19:20:31.545691   31154 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 19:20:31.545778   31154 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1001 19:20:31.545854   31154 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1001 19:20:31.545932   31154 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1001 19:20:31.546012   31154 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1001 19:20:31.546085   31154 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1001 19:20:31.546175   31154 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-193737 localhost] and IPs [192.168.39.14 127.0.0.1 ::1]
	I1001 19:20:31.546218   31154 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1001 19:20:31.546369   31154 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-193737 localhost] and IPs [192.168.39.14 127.0.0.1 ::1]
	I1001 19:20:31.546436   31154 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1001 19:20:31.546488   31154 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1001 19:20:31.546527   31154 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1001 19:20:31.546577   31154 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 19:20:31.546623   31154 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 19:20:31.546668   31154 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 19:20:31.546722   31154 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 19:20:31.546817   31154 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 19:20:31.546863   31154 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 19:20:31.546932   31154 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 19:20:31.547004   31154 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 19:20:31.549095   31154 out.go:235]   - Booting up control plane ...
	I1001 19:20:31.549193   31154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 19:20:31.549275   31154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 19:20:31.549365   31154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 19:20:31.549456   31154 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 19:20:31.549553   31154 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 19:20:31.549596   31154 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 19:20:31.549707   31154 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 19:20:31.549790   31154 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 19:20:31.549840   31154 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.357694ms
	I1001 19:20:31.549900   31154 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1001 19:20:31.549947   31154 kubeadm.go:310] [api-check] The API server is healthy after 6.04683454s
	I1001 19:20:31.550033   31154 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 19:20:31.550189   31154 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 19:20:31.550277   31154 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 19:20:31.550430   31154 kubeadm.go:310] [mark-control-plane] Marking the node ha-193737 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 19:20:31.550487   31154 kubeadm.go:310] [bootstrap-token] Using token: 7by4e8.7cs25dkxb8txjdft
	I1001 19:20:31.551753   31154 out.go:235]   - Configuring RBAC rules ...
	I1001 19:20:31.551859   31154 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 19:20:31.551994   31154 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 19:20:31.552131   31154 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 19:20:31.552254   31154 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 19:20:31.552369   31154 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 19:20:31.552467   31154 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 19:20:31.552576   31154 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 19:20:31.552620   31154 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 19:20:31.552661   31154 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 19:20:31.552670   31154 kubeadm.go:310] 
	I1001 19:20:31.552724   31154 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 19:20:31.552736   31154 kubeadm.go:310] 
	I1001 19:20:31.552812   31154 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 19:20:31.552820   31154 kubeadm.go:310] 
	I1001 19:20:31.552841   31154 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 19:20:31.552936   31154 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 19:20:31.553000   31154 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 19:20:31.553018   31154 kubeadm.go:310] 
	I1001 19:20:31.553076   31154 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 19:20:31.553082   31154 kubeadm.go:310] 
	I1001 19:20:31.553119   31154 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 19:20:31.553125   31154 kubeadm.go:310] 
	I1001 19:20:31.553165   31154 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 19:20:31.553231   31154 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 19:20:31.553309   31154 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 19:20:31.553319   31154 kubeadm.go:310] 
	I1001 19:20:31.553382   31154 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 19:20:31.553446   31154 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 19:20:31.553452   31154 kubeadm.go:310] 
	I1001 19:20:31.553515   31154 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7by4e8.7cs25dkxb8txjdft \
	I1001 19:20:31.553595   31154 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 \
	I1001 19:20:31.553612   31154 kubeadm.go:310] 	--control-plane 
	I1001 19:20:31.553616   31154 kubeadm.go:310] 
	I1001 19:20:31.553679   31154 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 19:20:31.553686   31154 kubeadm.go:310] 
	I1001 19:20:31.553757   31154 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7by4e8.7cs25dkxb8txjdft \
	I1001 19:20:31.553878   31154 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 
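The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, prefixed with "sha256:". A minimal sketch of computing it from ca.crt (the path is taken from this run's cert directory):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}

	// RawSubjectPublicKeyInfo is the DER-encoded SPKI that kubeadm hashes.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}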
	I1001 19:20:31.553899   31154 cni.go:84] Creating CNI manager for ""
	I1001 19:20:31.553906   31154 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1001 19:20:31.555354   31154 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1001 19:20:31.556734   31154 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1001 19:20:31.562528   31154 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1001 19:20:31.562546   31154 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1001 19:20:31.584306   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1001 19:20:31.963746   31154 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 19:20:31.963826   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:31.963839   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-193737 minikube.k8s.io/updated_at=2024_10_01T19_20_31_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=ha-193737 minikube.k8s.io/primary=true
	I1001 19:20:32.001753   31154 ops.go:34] apiserver oom_adj: -16
	I1001 19:20:32.132202   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:32.632805   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:33.133195   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:33.633216   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:34.132915   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:34.632316   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:35.132491   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:35.632537   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:36.132620   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:36.218756   31154 kubeadm.go:1113] duration metric: took 4.255002576s to wait for elevateKubeSystemPrivileges
	I1001 19:20:36.218788   31154 kubeadm.go:394] duration metric: took 16.368111595s to StartCluster
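The elevateKubeSystemPrivileges step above amounts to granting cluster-admin to kube-system's default service account and then polling until the default service account exists; a rough standalone equivalent of the commands logged above (binary and kubeconfig paths taken from the log):

    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default
    # poll until the "default" service account is present (the repeated "get sa default" lines above)
    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get sa default >/dev/null 2>&1; do sleep 0.5; done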
	I1001 19:20:36.218804   31154 settings.go:142] acquiring lock: {Name:mkeb3fe64ed992373f048aef24eb0d675bcab60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:36.218873   31154 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 19:20:36.219494   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/kubeconfig: {Name:mk7ef19d546928001abf2478a3abe1d17765a591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:36.219713   31154 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:20:36.219727   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1001 19:20:36.219734   31154 start.go:241] waiting for startup goroutines ...
	I1001 19:20:36.219741   31154 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 19:20:36.219834   31154 addons.go:69] Setting storage-provisioner=true in profile "ha-193737"
	I1001 19:20:36.219856   31154 addons.go:234] Setting addon storage-provisioner=true in "ha-193737"
	I1001 19:20:36.219869   31154 addons.go:69] Setting default-storageclass=true in profile "ha-193737"
	I1001 19:20:36.219886   31154 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-193737"
	I1001 19:20:36.219893   31154 host.go:66] Checking if "ha-193737" exists ...
	I1001 19:20:36.219970   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:20:36.220394   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:20:36.220428   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:20:36.220398   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:20:36.220520   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:20:36.237915   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41109
	I1001 19:20:36.238065   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40357
	I1001 19:20:36.238375   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:20:36.238551   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:20:36.238872   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:20:36.238891   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:20:36.239076   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:20:36.239108   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:20:36.239214   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:20:36.239454   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:20:36.239611   31154 main.go:141] libmachine: (ha-193737) Calling .GetState
	I1001 19:20:36.239781   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:20:36.239809   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:20:36.241737   31154 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 19:20:36.241972   31154 kapi.go:59] client config for ha-193737: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.crt", KeyFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key", CAFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 19:20:36.242414   31154 cert_rotation.go:140] Starting client certificate rotation controller
	I1001 19:20:36.242541   31154 addons.go:234] Setting addon default-storageclass=true in "ha-193737"
	I1001 19:20:36.242580   31154 host.go:66] Checking if "ha-193737" exists ...
	I1001 19:20:36.242883   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:20:36.242931   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:20:36.258780   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39715
	I1001 19:20:36.259292   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:20:36.259824   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:20:36.259850   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:20:36.260262   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:20:36.260587   31154 main.go:141] libmachine: (ha-193737) Calling .GetState
	I1001 19:20:36.262369   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37495
	I1001 19:20:36.262435   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:36.263083   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:20:36.263600   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:20:36.263628   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:20:36.264019   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:20:36.264582   31154 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 19:20:36.264749   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:20:36.264788   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:20:36.265963   31154 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 19:20:36.265987   31154 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 19:20:36.266008   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:36.270544   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:36.271199   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:36.271222   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:36.271425   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:36.271642   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:36.271818   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:36.272058   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:20:36.283812   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42859
	I1001 19:20:36.284387   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:20:36.284896   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:20:36.284913   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:20:36.285508   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:20:36.285834   31154 main.go:141] libmachine: (ha-193737) Calling .GetState
	I1001 19:20:36.288106   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:36.288393   31154 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 19:20:36.288414   31154 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 19:20:36.288437   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:36.291938   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:36.292436   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:36.292463   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:36.292681   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:36.292858   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:36.293020   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:36.293164   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:20:36.379914   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1001 19:20:36.401549   31154 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 19:20:36.450371   31154 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 19:20:36.756603   31154 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
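The sed pipeline above edits the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.39.1). If needed, the injected block can be checked with plain kubectl against the cluster's kubeconfig (a verification sketch, not part of the run):

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # expected to contain, ahead of the forward plugin:
    #     hosts {
    #        192.168.39.1 host.minikube.internal
    #        fallthrough
    #     }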
	I1001 19:20:37.190467   31154 main.go:141] libmachine: Making call to close driver server
	I1001 19:20:37.190501   31154 main.go:141] libmachine: (ha-193737) Calling .Close
	I1001 19:20:37.190537   31154 main.go:141] libmachine: Making call to close driver server
	I1001 19:20:37.190556   31154 main.go:141] libmachine: (ha-193737) Calling .Close
	I1001 19:20:37.190812   31154 main.go:141] libmachine: Successfully made call to close driver server
	I1001 19:20:37.190821   31154 main.go:141] libmachine: Successfully made call to close driver server
	I1001 19:20:37.190830   31154 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 19:20:37.190833   31154 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 19:20:37.190839   31154 main.go:141] libmachine: Making call to close driver server
	I1001 19:20:37.190841   31154 main.go:141] libmachine: Making call to close driver server
	I1001 19:20:37.190847   31154 main.go:141] libmachine: (ha-193737) Calling .Close
	I1001 19:20:37.190848   31154 main.go:141] libmachine: (ha-193737) Calling .Close
	I1001 19:20:37.191111   31154 main.go:141] libmachine: Successfully made call to close driver server
	I1001 19:20:37.191115   31154 main.go:141] libmachine: Successfully made call to close driver server
	I1001 19:20:37.191125   31154 main.go:141] libmachine: (ha-193737) DBG | Closing plugin on server side
	I1001 19:20:37.191134   31154 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 19:20:37.191134   31154 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 19:20:37.191205   31154 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1001 19:20:37.191222   31154 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1001 19:20:37.191338   31154 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1001 19:20:37.191344   31154 round_trippers.go:469] Request Headers:
	I1001 19:20:37.191354   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:20:37.191358   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:20:37.219411   31154 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I1001 19:20:37.219983   31154 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1001 19:20:37.219997   31154 round_trippers.go:469] Request Headers:
	I1001 19:20:37.220005   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:20:37.220008   31154 round_trippers.go:473]     Content-Type: application/json
	I1001 19:20:37.220011   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:20:37.228402   31154 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1001 19:20:37.228596   31154 main.go:141] libmachine: Making call to close driver server
	I1001 19:20:37.228610   31154 main.go:141] libmachine: (ha-193737) Calling .Close
	I1001 19:20:37.228929   31154 main.go:141] libmachine: Successfully made call to close driver server
	I1001 19:20:37.228950   31154 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 19:20:37.228974   31154 main.go:141] libmachine: (ha-193737) DBG | Closing plugin on server side
	I1001 19:20:37.230600   31154 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1001 19:20:37.231770   31154 addons.go:510] duration metric: took 1.012023889s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1001 19:20:37.231812   31154 start.go:246] waiting for cluster config update ...
	I1001 19:20:37.231823   31154 start.go:255] writing updated cluster config ...
	I1001 19:20:37.233187   31154 out.go:201] 
	I1001 19:20:37.234563   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:20:37.234629   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:20:37.236253   31154 out.go:177] * Starting "ha-193737-m02" control-plane node in "ha-193737" cluster
	I1001 19:20:37.237974   31154 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 19:20:37.238007   31154 cache.go:56] Caching tarball of preloaded images
	I1001 19:20:37.238089   31154 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 19:20:37.238106   31154 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 19:20:37.238204   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:20:37.238426   31154 start.go:360] acquireMachinesLock for ha-193737-m02: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 19:20:37.238490   31154 start.go:364] duration metric: took 37.598µs to acquireMachinesLock for "ha-193737-m02"
	I1001 19:20:37.238511   31154 start.go:93] Provisioning new machine with config: &{Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:20:37.238603   31154 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1001 19:20:37.240050   31154 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 19:20:37.240148   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:20:37.240181   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:20:37.256492   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43983
	I1001 19:20:37.257003   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:20:37.257628   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:20:37.257663   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:20:37.258069   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:20:37.258273   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetMachineName
	I1001 19:20:37.258413   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:37.258584   31154 start.go:159] libmachine.API.Create for "ha-193737" (driver="kvm2")
	I1001 19:20:37.258609   31154 client.go:168] LocalClient.Create starting
	I1001 19:20:37.258644   31154 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem
	I1001 19:20:37.258691   31154 main.go:141] libmachine: Decoding PEM data...
	I1001 19:20:37.258706   31154 main.go:141] libmachine: Parsing certificate...
	I1001 19:20:37.258752   31154 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem
	I1001 19:20:37.258775   31154 main.go:141] libmachine: Decoding PEM data...
	I1001 19:20:37.258791   31154 main.go:141] libmachine: Parsing certificate...
	I1001 19:20:37.258820   31154 main.go:141] libmachine: Running pre-create checks...
	I1001 19:20:37.258831   31154 main.go:141] libmachine: (ha-193737-m02) Calling .PreCreateCheck
	I1001 19:20:37.258981   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetConfigRaw
	I1001 19:20:37.259499   31154 main.go:141] libmachine: Creating machine...
	I1001 19:20:37.259521   31154 main.go:141] libmachine: (ha-193737-m02) Calling .Create
	I1001 19:20:37.259645   31154 main.go:141] libmachine: (ha-193737-m02) Creating KVM machine...
	I1001 19:20:37.261171   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found existing default KVM network
	I1001 19:20:37.261376   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found existing private KVM network mk-ha-193737
	I1001 19:20:37.261582   31154 main.go:141] libmachine: (ha-193737-m02) Setting up store path in /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02 ...
	I1001 19:20:37.261615   31154 main.go:141] libmachine: (ha-193737-m02) Building disk image from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 19:20:37.261632   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:37.261518   31541 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:20:37.261750   31154 main.go:141] libmachine: (ha-193737-m02) Downloading /home/jenkins/minikube-integration/19736-11198/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 19:20:37.511803   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:37.511639   31541 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa...
	I1001 19:20:37.705703   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:37.705550   31541 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/ha-193737-m02.rawdisk...
	I1001 19:20:37.705738   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Writing magic tar header
	I1001 19:20:37.705753   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Writing SSH key tar header
	I1001 19:20:37.705765   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:37.705670   31541 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02 ...
	I1001 19:20:37.705777   31154 main.go:141] libmachine: (ha-193737-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02 (perms=drwx------)
	I1001 19:20:37.705791   31154 main.go:141] libmachine: (ha-193737-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines (perms=drwxr-xr-x)
	I1001 19:20:37.705802   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02
	I1001 19:20:37.705808   31154 main.go:141] libmachine: (ha-193737-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube (perms=drwxr-xr-x)
	I1001 19:20:37.705819   31154 main.go:141] libmachine: (ha-193737-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198 (perms=drwxrwxr-x)
	I1001 19:20:37.705827   31154 main.go:141] libmachine: (ha-193737-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 19:20:37.705840   31154 main.go:141] libmachine: (ha-193737-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 19:20:37.705857   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines
	I1001 19:20:37.705865   31154 main.go:141] libmachine: (ha-193737-m02) Creating domain...
	I1001 19:20:37.705882   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:20:37.705895   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198
	I1001 19:20:37.705908   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 19:20:37.705917   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Checking permissions on dir: /home/jenkins
	I1001 19:20:37.705926   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Checking permissions on dir: /home
	I1001 19:20:37.705934   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Skipping /home - not owner
	I1001 19:20:37.706847   31154 main.go:141] libmachine: (ha-193737-m02) define libvirt domain using xml: 
	I1001 19:20:37.706866   31154 main.go:141] libmachine: (ha-193737-m02) <domain type='kvm'>
	I1001 19:20:37.706875   31154 main.go:141] libmachine: (ha-193737-m02)   <name>ha-193737-m02</name>
	I1001 19:20:37.706882   31154 main.go:141] libmachine: (ha-193737-m02)   <memory unit='MiB'>2200</memory>
	I1001 19:20:37.706889   31154 main.go:141] libmachine: (ha-193737-m02)   <vcpu>2</vcpu>
	I1001 19:20:37.706899   31154 main.go:141] libmachine: (ha-193737-m02)   <features>
	I1001 19:20:37.706907   31154 main.go:141] libmachine: (ha-193737-m02)     <acpi/>
	I1001 19:20:37.706913   31154 main.go:141] libmachine: (ha-193737-m02)     <apic/>
	I1001 19:20:37.706921   31154 main.go:141] libmachine: (ha-193737-m02)     <pae/>
	I1001 19:20:37.706927   31154 main.go:141] libmachine: (ha-193737-m02)     
	I1001 19:20:37.706935   31154 main.go:141] libmachine: (ha-193737-m02)   </features>
	I1001 19:20:37.706943   31154 main.go:141] libmachine: (ha-193737-m02)   <cpu mode='host-passthrough'>
	I1001 19:20:37.706947   31154 main.go:141] libmachine: (ha-193737-m02)   
	I1001 19:20:37.706951   31154 main.go:141] libmachine: (ha-193737-m02)   </cpu>
	I1001 19:20:37.706958   31154 main.go:141] libmachine: (ha-193737-m02)   <os>
	I1001 19:20:37.706963   31154 main.go:141] libmachine: (ha-193737-m02)     <type>hvm</type>
	I1001 19:20:37.706969   31154 main.go:141] libmachine: (ha-193737-m02)     <boot dev='cdrom'/>
	I1001 19:20:37.706979   31154 main.go:141] libmachine: (ha-193737-m02)     <boot dev='hd'/>
	I1001 19:20:37.706999   31154 main.go:141] libmachine: (ha-193737-m02)     <bootmenu enable='no'/>
	I1001 19:20:37.707014   31154 main.go:141] libmachine: (ha-193737-m02)   </os>
	I1001 19:20:37.707026   31154 main.go:141] libmachine: (ha-193737-m02)   <devices>
	I1001 19:20:37.707037   31154 main.go:141] libmachine: (ha-193737-m02)     <disk type='file' device='cdrom'>
	I1001 19:20:37.707052   31154 main.go:141] libmachine: (ha-193737-m02)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/boot2docker.iso'/>
	I1001 19:20:37.707067   31154 main.go:141] libmachine: (ha-193737-m02)       <target dev='hdc' bus='scsi'/>
	I1001 19:20:37.707078   31154 main.go:141] libmachine: (ha-193737-m02)       <readonly/>
	I1001 19:20:37.707090   31154 main.go:141] libmachine: (ha-193737-m02)     </disk>
	I1001 19:20:37.707105   31154 main.go:141] libmachine: (ha-193737-m02)     <disk type='file' device='disk'>
	I1001 19:20:37.707118   31154 main.go:141] libmachine: (ha-193737-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 19:20:37.707132   31154 main.go:141] libmachine: (ha-193737-m02)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/ha-193737-m02.rawdisk'/>
	I1001 19:20:37.707142   31154 main.go:141] libmachine: (ha-193737-m02)       <target dev='hda' bus='virtio'/>
	I1001 19:20:37.707150   31154 main.go:141] libmachine: (ha-193737-m02)     </disk>
	I1001 19:20:37.707164   31154 main.go:141] libmachine: (ha-193737-m02)     <interface type='network'>
	I1001 19:20:37.707176   31154 main.go:141] libmachine: (ha-193737-m02)       <source network='mk-ha-193737'/>
	I1001 19:20:37.707186   31154 main.go:141] libmachine: (ha-193737-m02)       <model type='virtio'/>
	I1001 19:20:37.707196   31154 main.go:141] libmachine: (ha-193737-m02)     </interface>
	I1001 19:20:37.707206   31154 main.go:141] libmachine: (ha-193737-m02)     <interface type='network'>
	I1001 19:20:37.707217   31154 main.go:141] libmachine: (ha-193737-m02)       <source network='default'/>
	I1001 19:20:37.707227   31154 main.go:141] libmachine: (ha-193737-m02)       <model type='virtio'/>
	I1001 19:20:37.707241   31154 main.go:141] libmachine: (ha-193737-m02)     </interface>
	I1001 19:20:37.707259   31154 main.go:141] libmachine: (ha-193737-m02)     <serial type='pty'>
	I1001 19:20:37.707267   31154 main.go:141] libmachine: (ha-193737-m02)       <target port='0'/>
	I1001 19:20:37.707272   31154 main.go:141] libmachine: (ha-193737-m02)     </serial>
	I1001 19:20:37.707279   31154 main.go:141] libmachine: (ha-193737-m02)     <console type='pty'>
	I1001 19:20:37.707283   31154 main.go:141] libmachine: (ha-193737-m02)       <target type='serial' port='0'/>
	I1001 19:20:37.707290   31154 main.go:141] libmachine: (ha-193737-m02)     </console>
	I1001 19:20:37.707295   31154 main.go:141] libmachine: (ha-193737-m02)     <rng model='virtio'>
	I1001 19:20:37.707303   31154 main.go:141] libmachine: (ha-193737-m02)       <backend model='random'>/dev/random</backend>
	I1001 19:20:37.707306   31154 main.go:141] libmachine: (ha-193737-m02)     </rng>
	I1001 19:20:37.707313   31154 main.go:141] libmachine: (ha-193737-m02)     
	I1001 19:20:37.707317   31154 main.go:141] libmachine: (ha-193737-m02)     
	I1001 19:20:37.707323   31154 main.go:141] libmachine: (ha-193737-m02)   </devices>
	I1001 19:20:37.707331   31154 main.go:141] libmachine: (ha-193737-m02) </domain>
	I1001 19:20:37.707362   31154 main.go:141] libmachine: (ha-193737-m02) 
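The kvm2 driver defines and boots this domain through the libvirt API rather than by shelling out; a rough virsh equivalent of the same steps, assuming the XML above were saved to a file named ha-193737-m02.xml:

    virsh --connect qemu:///system define ha-193737-m02.xml
    virsh --connect qemu:///system start ha-193737-m02
    # poll for the DHCP lease, which is what the "Waiting to get IP" retries below are doing
    virsh --connect qemu:///system domifaddr ha-193737-m02 --source lease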
	I1001 19:20:37.714050   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:2e:69:af in network default
	I1001 19:20:37.714587   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:37.714605   31154 main.go:141] libmachine: (ha-193737-m02) Ensuring networks are active...
	I1001 19:20:37.715386   31154 main.go:141] libmachine: (ha-193737-m02) Ensuring network default is active
	I1001 19:20:37.715688   31154 main.go:141] libmachine: (ha-193737-m02) Ensuring network mk-ha-193737 is active
	I1001 19:20:37.716026   31154 main.go:141] libmachine: (ha-193737-m02) Getting domain xml...
	I1001 19:20:37.716683   31154 main.go:141] libmachine: (ha-193737-m02) Creating domain...
	I1001 19:20:38.946823   31154 main.go:141] libmachine: (ha-193737-m02) Waiting to get IP...
	I1001 19:20:38.947612   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:38.948069   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:38.948111   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:38.948057   31541 retry.go:31] will retry after 211.487702ms: waiting for machine to come up
	I1001 19:20:39.161472   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:39.161945   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:39.161981   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:39.161920   31541 retry.go:31] will retry after 369.29813ms: waiting for machine to come up
	I1001 19:20:39.532486   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:39.533006   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:39.533034   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:39.532951   31541 retry.go:31] will retry after 340.79833ms: waiting for machine to come up
	I1001 19:20:39.875453   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:39.875902   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:39.875928   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:39.875855   31541 retry.go:31] will retry after 558.36179ms: waiting for machine to come up
	I1001 19:20:40.435617   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:40.436128   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:40.436156   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:40.436070   31541 retry.go:31] will retry after 724.412456ms: waiting for machine to come up
	I1001 19:20:41.161753   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:41.162215   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:41.162238   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:41.162183   31541 retry.go:31] will retry after 921.122771ms: waiting for machine to come up
	I1001 19:20:42.085509   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:42.085978   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:42.086002   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:42.085932   31541 retry.go:31] will retry after 886.914683ms: waiting for machine to come up
	I1001 19:20:42.974460   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:42.974900   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:42.974926   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:42.974856   31541 retry.go:31] will retry after 1.455695023s: waiting for machine to come up
	I1001 19:20:44.432773   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:44.433336   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:44.433365   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:44.433292   31541 retry.go:31] will retry after 1.415796379s: waiting for machine to come up
	I1001 19:20:45.850938   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:45.851337   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:45.851357   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:45.851309   31541 retry.go:31] will retry after 1.972979972s: waiting for machine to come up
	I1001 19:20:47.825356   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:47.825785   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:47.825812   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:47.825732   31541 retry.go:31] will retry after 1.92262401s: waiting for machine to come up
	I1001 19:20:49.750763   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:49.751160   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:49.751177   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:49.751137   31541 retry.go:31] will retry after 3.587777506s: waiting for machine to come up
	I1001 19:20:53.340173   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:53.340566   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:53.340617   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:53.340558   31541 retry.go:31] will retry after 3.748563727s: waiting for machine to come up
	I1001 19:20:57.093502   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.094007   31154 main.go:141] libmachine: (ha-193737-m02) Found IP for machine: 192.168.39.27
	I1001 19:20:57.094023   31154 main.go:141] libmachine: (ha-193737-m02) Reserving static IP address...
	I1001 19:20:57.094037   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has current primary IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.094391   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find host DHCP lease matching {name: "ha-193737-m02", mac: "52:54:00:7b:e4:d4", ip: "192.168.39.27"} in network mk-ha-193737
	I1001 19:20:57.171234   31154 main.go:141] libmachine: (ha-193737-m02) Reserved static IP address: 192.168.39.27
	I1001 19:20:57.171257   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Getting to WaitForSSH function...
	I1001 19:20:57.171265   31154 main.go:141] libmachine: (ha-193737-m02) Waiting for SSH to be available...
	I1001 19:20:57.173965   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.174561   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:57.174594   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.174717   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Using SSH client type: external
	I1001 19:20:57.174748   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa (-rw-------)
	I1001 19:20:57.174779   31154 main.go:141] libmachine: (ha-193737-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.27 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 19:20:57.174794   31154 main.go:141] libmachine: (ha-193737-m02) DBG | About to run SSH command:
	I1001 19:20:57.174810   31154 main.go:141] libmachine: (ha-193737-m02) DBG | exit 0
	I1001 19:20:57.304572   31154 main.go:141] libmachine: (ha-193737-m02) DBG | SSH cmd err, output: <nil>: 
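For readability, the external SSH probe logged above amounts to the following single command (same options, key path, and target as in the log):

    /usr/bin/ssh -F /dev/null \
      -o ConnectionAttempts=3 -o ConnectTimeout=10 \
      -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
      -o PasswordAuthentication=no -o ServerAliveInterval=60 \
      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -o IdentitiesOnly=yes \
      -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa \
      -p 22 docker@192.168.39.27 'exit 0'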
	I1001 19:20:57.304868   31154 main.go:141] libmachine: (ha-193737-m02) KVM machine creation complete!
	I1001 19:20:57.305162   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetConfigRaw
	I1001 19:20:57.305752   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:57.305953   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:57.306163   31154 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 19:20:57.306232   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetState
	I1001 19:20:57.307715   31154 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 19:20:57.307729   31154 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 19:20:57.307736   31154 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 19:20:57.307743   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:57.310409   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.310801   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:57.310826   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.310956   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:57.311136   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.311267   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.311408   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:57.311603   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:57.311799   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1001 19:20:57.311811   31154 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 19:20:57.423687   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 19:20:57.423716   31154 main.go:141] libmachine: Detecting the provisioner...
	I1001 19:20:57.423741   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:57.426918   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.427323   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:57.427358   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.427583   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:57.427788   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.428027   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.428201   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:57.428392   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:57.428632   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1001 19:20:57.428762   31154 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 19:20:57.541173   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 19:20:57.541232   31154 main.go:141] libmachine: found compatible host: buildroot
	I1001 19:20:57.541238   31154 main.go:141] libmachine: Provisioning with buildroot...
	I1001 19:20:57.541245   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetMachineName
	I1001 19:20:57.541504   31154 buildroot.go:166] provisioning hostname "ha-193737-m02"
	I1001 19:20:57.541527   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetMachineName
	I1001 19:20:57.541689   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:57.544406   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.544791   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:57.544830   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.544962   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:57.545135   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.545283   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.545382   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:57.545543   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:57.545753   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1001 19:20:57.545769   31154 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-193737-m02 && echo "ha-193737-m02" | sudo tee /etc/hostname
	I1001 19:20:57.675116   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-193737-m02
	
	I1001 19:20:57.675147   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:57.678239   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.678600   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:57.678624   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.678822   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:57.679011   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.679146   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.679254   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:57.679397   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:57.679573   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1001 19:20:57.679599   31154 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-193737-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-193737-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-193737-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 19:20:57.800899   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 19:20:57.800928   31154 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 19:20:57.800946   31154 buildroot.go:174] setting up certificates
	I1001 19:20:57.800957   31154 provision.go:84] configureAuth start
	I1001 19:20:57.800969   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetMachineName
	I1001 19:20:57.801194   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetIP
	I1001 19:20:57.803613   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.803954   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:57.803982   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.804134   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:57.806340   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.806657   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:57.806678   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.806860   31154 provision.go:143] copyHostCerts
	I1001 19:20:57.806892   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:20:57.806929   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 19:20:57.806937   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:20:57.807013   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 19:20:57.807084   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:20:57.807101   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 19:20:57.807107   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:20:57.807131   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 19:20:57.807178   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:20:57.807196   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 19:20:57.807202   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:20:57.807221   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 19:20:57.807269   31154 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.ha-193737-m02 san=[127.0.0.1 192.168.39.27 ha-193737-m02 localhost minikube]
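The server-cert step logged above generates a TLS certificate for the new VM, signed by the minikube CA, with the org= and san= values shown in that line. The standalone Go sketch below is illustrative only and is not minikube's provision.go: it creates a throwaway CA in memory purely to stay self-contained (the real run loads ca.pem and ca-key.pem from .minikube/certs), but it shows how those log values map onto x509 Subject and SAN fields.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA, standing in for .minikube/certs/ca.pem and ca-key.pem
	// (an assumption made only to keep this sketch self-contained).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate for the new node, using the org= and san= values from the log:
	// IPs 127.0.0.1 and 192.168.39.27, DNS names ha-193737-m02, localhost, minikube.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-193737-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.27")},
		DNSNames:     []string{"ha-193737-m02", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)

	// Write server.pem; the copyRemoteCerts step that follows in the log then ships
	// server.pem and server-key.pem to /etc/docker on the VM over SSH.
	out, err := os.Create("server.pem")
	check(err)
	defer out.Close()
	check(pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}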
	I1001 19:20:58.056549   31154 provision.go:177] copyRemoteCerts
	I1001 19:20:58.056608   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 19:20:58.056631   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:58.059291   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.059620   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.059653   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.059823   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:58.060033   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.060174   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:58.060291   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa Username:docker}
	I1001 19:20:58.146502   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 19:20:58.146577   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 19:20:58.170146   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 19:20:58.170211   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 19:20:58.193090   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 19:20:58.193172   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 19:20:58.215033   31154 provision.go:87] duration metric: took 414.061487ms to configureAuth
	I1001 19:20:58.215067   31154 buildroot.go:189] setting minikube options for container-runtime
	I1001 19:20:58.215250   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:20:58.215327   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:58.218149   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.218497   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.218527   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.218653   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:58.218868   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.219033   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.219156   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:58.219300   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:58.219460   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1001 19:20:58.219473   31154 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 19:20:58.470145   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 19:20:58.470178   31154 main.go:141] libmachine: Checking connection to Docker...
	I1001 19:20:58.470189   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetURL
	I1001 19:20:58.471402   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Using libvirt version 6000000
	I1001 19:20:58.474024   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.474371   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.474412   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.474613   31154 main.go:141] libmachine: Docker is up and running!
	I1001 19:20:58.474631   31154 main.go:141] libmachine: Reticulating splines...
	I1001 19:20:58.474639   31154 client.go:171] duration metric: took 21.216022282s to LocalClient.Create
	I1001 19:20:58.474664   31154 start.go:167] duration metric: took 21.216081227s to libmachine.API.Create "ha-193737"
	I1001 19:20:58.474674   31154 start.go:293] postStartSetup for "ha-193737-m02" (driver="kvm2")
	I1001 19:20:58.474687   31154 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 19:20:58.474711   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:58.475026   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 19:20:58.475056   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:58.477612   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.478051   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.478084   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.478170   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:58.478359   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.478475   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:58.478613   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa Username:docker}
	I1001 19:20:58.566449   31154 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 19:20:58.570622   31154 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 19:20:58.570648   31154 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 19:20:58.570715   31154 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 19:20:58.570786   31154 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 19:20:58.570798   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /etc/ssl/certs/184302.pem
	I1001 19:20:58.570944   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 19:20:58.579535   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:20:58.601457   31154 start.go:296] duration metric: took 126.771104ms for postStartSetup
	I1001 19:20:58.601513   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetConfigRaw
	I1001 19:20:58.602068   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetIP
	I1001 19:20:58.604495   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.604874   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.604900   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.605223   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:20:58.605434   31154 start.go:128] duration metric: took 21.366818669s to createHost
	I1001 19:20:58.605467   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:58.607650   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.608026   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.608051   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.608184   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:58.608337   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.608453   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.608557   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:58.608693   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:58.608837   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1001 19:20:58.608847   31154 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 19:20:58.721980   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727810458.681508368
	
	I1001 19:20:58.722008   31154 fix.go:216] guest clock: 1727810458.681508368
	I1001 19:20:58.722018   31154 fix.go:229] Guest: 2024-10-01 19:20:58.681508368 +0000 UTC Remote: 2024-10-01 19:20:58.605448095 +0000 UTC m=+70.833286913 (delta=76.060273ms)
	I1001 19:20:58.722040   31154 fix.go:200] guest clock delta is within tolerance: 76.060273ms
	I1001 19:20:58.722049   31154 start.go:83] releasing machines lock for "ha-193737-m02", held for 21.483548504s
	I1001 19:20:58.722074   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:58.722316   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetIP
	I1001 19:20:58.725092   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.725406   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.725439   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.727497   31154 out.go:177] * Found network options:
	I1001 19:20:58.728546   31154 out.go:177]   - NO_PROXY=192.168.39.14
	W1001 19:20:58.729434   31154 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 19:20:58.729479   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:58.729929   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:58.730082   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:58.730149   31154 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 19:20:58.730189   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	W1001 19:20:58.730253   31154 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 19:20:58.730326   31154 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 19:20:58.730347   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:58.732847   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.732897   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.733209   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.733238   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.733263   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.733277   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.733405   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:58.733481   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:58.733618   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.733656   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.733727   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:58.733802   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:58.733822   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa Username:docker}
	I1001 19:20:58.733934   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa Username:docker}
	I1001 19:20:58.972871   31154 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 19:20:58.978194   31154 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 19:20:58.978260   31154 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 19:20:58.994663   31154 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 19:20:58.994684   31154 start.go:495] detecting cgroup driver to use...
	I1001 19:20:58.994738   31154 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 19:20:59.011009   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 19:20:59.025521   31154 docker.go:217] disabling cri-docker service (if available) ...
	I1001 19:20:59.025608   31154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 19:20:59.039348   31154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 19:20:59.052807   31154 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 19:20:59.169289   31154 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 19:20:59.334757   31154 docker.go:233] disabling docker service ...
	I1001 19:20:59.334834   31154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 19:20:59.348035   31154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 19:20:59.360660   31154 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 19:20:59.486509   31154 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 19:20:59.604588   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 19:20:59.617998   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 19:20:59.635554   31154 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 19:20:59.635626   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:59.645574   31154 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 19:20:59.645648   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:59.655487   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:59.665223   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:59.674970   31154 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 19:20:59.684872   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:59.694696   31154 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:59.710618   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:59.721089   31154 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 19:20:59.731283   31154 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 19:20:59.731352   31154 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 19:20:59.746274   31154 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 19:20:59.756184   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:20:59.870307   31154 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 19:20:59.956939   31154 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 19:20:59.957022   31154 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 19:20:59.961766   31154 start.go:563] Will wait 60s for crictl version
	I1001 19:20:59.961831   31154 ssh_runner.go:195] Run: which crictl
	I1001 19:20:59.965776   31154 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 19:21:00.010361   31154 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 19:21:00.010446   31154 ssh_runner.go:195] Run: crio --version
	I1001 19:21:00.041083   31154 ssh_runner.go:195] Run: crio --version
	I1001 19:21:00.075668   31154 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 19:21:00.077105   31154 out.go:177]   - env NO_PROXY=192.168.39.14
	I1001 19:21:00.078374   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetIP
	I1001 19:21:00.081375   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:21:00.081679   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:21:00.081711   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:21:00.081983   31154 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 19:21:00.086306   31154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 19:21:00.099180   31154 mustload.go:65] Loading cluster: ha-193737
	I1001 19:21:00.099450   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:21:00.099790   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:21:00.099833   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:21:00.115527   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43263
	I1001 19:21:00.116081   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:21:00.116546   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:21:00.116565   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:21:00.116887   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:21:00.117121   31154 main.go:141] libmachine: (ha-193737) Calling .GetState
	I1001 19:21:00.118679   31154 host.go:66] Checking if "ha-193737" exists ...
	I1001 19:21:00.118968   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:21:00.119005   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:21:00.133660   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37793
	I1001 19:21:00.134171   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:21:00.134638   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:21:00.134657   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:21:00.134945   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:21:00.135112   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:21:00.135251   31154 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737 for IP: 192.168.39.27
	I1001 19:21:00.135263   31154 certs.go:194] generating shared ca certs ...
	I1001 19:21:00.135281   31154 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:21:00.135407   31154 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 19:21:00.135448   31154 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 19:21:00.135454   31154 certs.go:256] generating profile certs ...
	I1001 19:21:00.135523   31154 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key
	I1001 19:21:00.135547   31154 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.b6f75b80
	I1001 19:21:00.135561   31154 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.b6f75b80 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.14 192.168.39.27 192.168.39.254]
	I1001 19:21:00.686434   31154 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.b6f75b80 ...
	I1001 19:21:00.686467   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.b6f75b80: {Name:mkeb01bd9448160d7d89858bc8ed1c53818e2061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:21:00.686650   31154 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.b6f75b80 ...
	I1001 19:21:00.686663   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.b6f75b80: {Name:mk3a8c2ce4c29185d261167caf7207467c082c1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:21:00.686733   31154 certs.go:381] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.b6f75b80 -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt
	I1001 19:21:00.686905   31154 certs.go:385] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.b6f75b80 -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key
	I1001 19:21:00.687041   31154 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key
	I1001 19:21:00.687055   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 19:21:00.687068   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 19:21:00.687080   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 19:21:00.687093   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 19:21:00.687105   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 19:21:00.687117   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 19:21:00.687128   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 19:21:00.687140   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 19:21:00.687188   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem (1338 bytes)
	W1001 19:21:00.687218   31154 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430_empty.pem, impossibly tiny 0 bytes
	I1001 19:21:00.687227   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 19:21:00.687249   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 19:21:00.687269   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 19:21:00.687290   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 19:21:00.687321   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:21:00.687345   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:21:00.687358   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem -> /usr/share/ca-certificates/18430.pem
	I1001 19:21:00.687370   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /usr/share/ca-certificates/184302.pem
	I1001 19:21:00.687398   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:21:00.690221   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:21:00.690721   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:21:00.690750   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:21:00.690891   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:21:00.691103   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:21:00.691297   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:21:00.691469   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:21:00.764849   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1001 19:21:00.770067   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1001 19:21:00.781099   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1001 19:21:00.785191   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1001 19:21:00.796213   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1001 19:21:00.800405   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1001 19:21:00.810899   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1001 19:21:00.815556   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1001 19:21:00.825792   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1001 19:21:00.830049   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1001 19:21:00.841022   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1001 19:21:00.845622   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1001 19:21:00.857011   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 19:21:00.881387   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 19:21:00.905420   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 19:21:00.930584   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 19:21:00.957479   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1001 19:21:00.982115   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 19:21:01.005996   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 19:21:01.031948   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 19:21:01.059129   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 19:21:01.084143   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem --> /usr/share/ca-certificates/18430.pem (1338 bytes)
	I1001 19:21:01.109909   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /usr/share/ca-certificates/184302.pem (1708 bytes)
	I1001 19:21:01.133720   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1001 19:21:01.150500   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1001 19:21:01.168599   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1001 19:21:01.185368   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1001 19:21:01.202279   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1001 19:21:01.218930   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1001 19:21:01.235286   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1001 19:21:01.251963   31154 ssh_runner.go:195] Run: openssl version
	I1001 19:21:01.257542   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 19:21:01.268254   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:21:01.272732   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:21:01.272802   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:21:01.278777   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 19:21:01.290880   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18430.pem && ln -fs /usr/share/ca-certificates/18430.pem /etc/ssl/certs/18430.pem"
	I1001 19:21:01.301840   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18430.pem
	I1001 19:21:01.306397   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 19:21:01.306469   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18430.pem
	I1001 19:21:01.312313   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18430.pem /etc/ssl/certs/51391683.0"
	I1001 19:21:01.322717   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/184302.pem && ln -fs /usr/share/ca-certificates/184302.pem /etc/ssl/certs/184302.pem"
	I1001 19:21:01.333015   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/184302.pem
	I1001 19:21:01.337340   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 19:21:01.337400   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/184302.pem
	I1001 19:21:01.343033   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/184302.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 19:21:01.354495   31154 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 19:21:01.358223   31154 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 19:21:01.358275   31154 kubeadm.go:934] updating node {m02 192.168.39.27 8443 v1.31.1 crio true true} ...
	I1001 19:21:01.358349   31154 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-193737-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.27
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 19:21:01.358373   31154 kube-vip.go:115] generating kube-vip config ...
	I1001 19:21:01.358405   31154 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1001 19:21:01.374873   31154 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1001 19:21:01.374943   31154 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1001 19:21:01.374989   31154 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 19:21:01.384444   31154 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1001 19:21:01.384518   31154 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1001 19:21:01.394161   31154 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1001 19:21:01.394190   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 19:21:01.394191   31154 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1001 19:21:01.394256   31154 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 19:21:01.394189   31154 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1001 19:21:01.398439   31154 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1001 19:21:01.398487   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1001 19:21:02.673266   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 19:21:02.673366   31154 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 19:21:02.678383   31154 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1001 19:21:02.678421   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1001 19:21:02.683681   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 19:21:02.723149   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 19:21:02.723251   31154 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 19:21:02.737865   31154 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1001 19:21:02.737908   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1001 19:21:03.230970   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1001 19:21:03.240943   31154 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1001 19:21:03.257655   31154 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 19:21:03.274741   31154 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1001 19:21:03.291537   31154 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1001 19:21:03.295338   31154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 19:21:03.307165   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:21:03.463069   31154 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 19:21:03.480147   31154 host.go:66] Checking if "ha-193737" exists ...
	I1001 19:21:03.480689   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:21:03.480744   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:21:03.495841   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39579
	I1001 19:21:03.496320   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:21:03.496880   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:21:03.496904   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:21:03.497248   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:21:03.497421   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:21:03.497546   31154 start.go:317] joinCluster: &{Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluste
rName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:21:03.497680   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1001 19:21:03.497702   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:21:03.500751   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:21:03.501276   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:21:03.501306   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:21:03.501495   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:21:03.501701   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:21:03.501893   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:21:03.502064   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:21:03.648333   31154 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:21:03.648405   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n692vg.wpdyj1cg443tmqgp --discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-193737-m02 --control-plane --apiserver-advertise-address=192.168.39.27 --apiserver-bind-port=8443"
	I1001 19:21:25.467048   31154 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n692vg.wpdyj1cg443tmqgp --discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-193737-m02 --control-plane --apiserver-advertise-address=192.168.39.27 --apiserver-bind-port=8443": (21.818614216s)
	I1001 19:21:25.467085   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1001 19:21:26.061914   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-193737-m02 minikube.k8s.io/updated_at=2024_10_01T19_21_26_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=ha-193737 minikube.k8s.io/primary=false
	I1001 19:21:26.203974   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-193737-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1001 19:21:26.315094   31154 start.go:319] duration metric: took 22.817544624s to joinCluster
	I1001 19:21:26.315164   31154 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:21:26.315617   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:21:26.316452   31154 out.go:177] * Verifying Kubernetes components...
	I1001 19:21:26.317646   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:21:26.611377   31154 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 19:21:26.640565   31154 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 19:21:26.640891   31154 kapi.go:59] client config for ha-193737: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.crt", KeyFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key", CAFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1001 19:21:26.640968   31154 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.14:8443
	I1001 19:21:26.641227   31154 node_ready.go:35] waiting up to 6m0s for node "ha-193737-m02" to be "Ready" ...
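The repeated GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02 requests that follow are this wait loop polling the node object until its Ready condition turns true. A rough standalone equivalent using client-go is sketched below; it is not minikube's node_ready.go, and the kubeconfig path is a placeholder rather than the profile kubeconfig the test actually uses.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder path; the test run loads the profile kubeconfig from the
	// minikube-integration home directory instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait in the log
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-193737-m02", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println(`node "ha-193737-m02" is Ready`)
			return
		}
		time.Sleep(500 * time.Millisecond) // the log shows roughly half-second polling
	}
	fmt.Println(`timed out waiting for node "ha-193737-m02" to be Ready`)
}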
	I1001 19:21:26.641356   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:26.641366   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:26.641375   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:26.641380   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:26.653154   31154 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1001 19:21:27.141735   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:27.141756   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:27.141764   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:27.141768   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:27.148495   31154 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1001 19:21:27.641626   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:27.641661   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:27.641672   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:27.641677   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:27.646178   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:28.142172   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:28.142200   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:28.142210   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:28.142216   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:28.146315   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:28.641888   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:28.641917   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:28.641931   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:28.641940   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:28.645578   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:28.646211   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:29.141557   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:29.141582   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:29.141592   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:29.141597   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:29.146956   31154 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1001 19:21:29.641796   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:29.641817   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:29.641824   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:29.641829   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:29.645155   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:30.142079   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:30.142103   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:30.142114   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:30.142119   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:30.145277   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:30.642189   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:30.642209   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:30.642217   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:30.642220   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:30.646863   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:30.647494   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:31.141763   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:31.141784   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:31.141796   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:31.141801   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:31.145813   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:31.641815   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:31.641836   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:31.641847   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:31.641853   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:31.645200   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:32.141448   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:32.141473   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:32.141486   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:32.141493   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:32.145295   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:32.641622   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:32.641643   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:32.641649   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:32.641653   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:32.645174   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:33.141797   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:33.141818   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:33.141826   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:33.141830   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:33.145091   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:33.145688   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:33.641422   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:33.641445   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:33.641454   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:33.641464   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:33.644675   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:34.141560   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:34.141589   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:34.141601   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:34.141607   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:34.145278   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:34.641659   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:34.641678   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:34.641686   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:34.641691   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:34.644811   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:35.142049   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:35.142075   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:35.142083   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:35.142087   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:35.145002   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:35.641531   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:35.641559   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:35.641573   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:35.641586   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:35.644829   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:35.645348   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:36.141635   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:36.141655   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:36.141663   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:36.141668   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:36.144536   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:36.642098   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:36.642119   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:36.642127   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:36.642130   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:36.645313   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:37.142420   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:37.142468   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:37.142477   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:37.142481   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:37.145780   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:37.641627   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:37.641647   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:37.641655   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:37.641659   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:37.644484   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:38.142220   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:38.142244   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:38.142255   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:38.142262   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:38.145466   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:38.146172   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:38.641992   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:38.642015   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:38.642024   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:38.642028   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:38.644515   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:39.141559   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:39.141585   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:39.141595   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:39.141601   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:39.145034   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:39.641804   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:39.641838   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:39.641845   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:39.641850   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:39.646296   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:40.142227   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:40.142248   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:40.142256   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:40.142260   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:40.145591   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:40.642234   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:40.642258   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:40.642267   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:40.642271   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:40.645384   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:40.646037   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:41.142410   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:41.142429   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:41.142437   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:41.142441   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:41.145729   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:41.642146   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:41.642167   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:41.642174   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:41.642178   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:41.645647   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:42.141537   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:42.141559   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:42.141569   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:42.141575   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:42.144817   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:42.642106   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:42.642127   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:42.642136   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:42.642141   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:42.645934   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:42.646419   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:43.141441   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:43.141464   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:43.141472   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:43.141476   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:43.144793   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:43.642316   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:43.642337   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:43.642345   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:43.642351   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:43.646007   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:44.142085   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:44.142106   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:44.142114   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:44.142117   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:44.145431   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:44.642346   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:44.642368   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:44.642376   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:44.642379   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:44.645860   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:45.142289   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:45.142312   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.142323   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.142330   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.145780   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:45.146379   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:45.641699   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:45.641725   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.641733   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.641736   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.645813   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:45.646591   31154 node_ready.go:49] node "ha-193737-m02" has status "Ready":"True"
	I1001 19:21:45.646618   31154 node_ready.go:38] duration metric: took 19.005351721s for node "ha-193737-m02" to be "Ready" ...
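
The loop above waits for node Ready by issuing a raw GET against /api/v1/nodes/ha-193737-m02 roughly every 500ms and inspecting the returned status. As a hedged illustration only (minikube itself builds the rest.Config shown earlier in this log and drives the requests through its own round trippers), the same wait could be written with client-go like this; the kubeconfig path is a hypothetical placeholder.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object until its Ready condition reports True,
// mirroring the ~500ms GET loop visible in the log above.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	// Hypothetical path; substitute the kubeconfig written by the test run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "ha-193737-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
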
	I1001 19:21:45.646627   31154 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 19:21:45.646691   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:21:45.646700   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.646707   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.646713   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.650655   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:45.657881   31154 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hd5hv" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.657971   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hd5hv
	I1001 19:21:45.657980   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.657988   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.657993   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.660900   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:45.661620   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:45.661639   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.661649   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.661657   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.665733   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:45.666386   31154 pod_ready.go:93] pod "coredns-7c65d6cfc9-hd5hv" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:45.666409   31154 pod_ready.go:82] duration metric: took 8.499445ms for pod "coredns-7c65d6cfc9-hd5hv" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.666421   31154 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v2wsx" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.666492   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-v2wsx
	I1001 19:21:45.666502   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.666512   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.666518   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.669133   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:45.669889   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:45.669907   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.669918   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.669923   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.672275   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:45.672755   31154 pod_ready.go:93] pod "coredns-7c65d6cfc9-v2wsx" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:45.672774   31154 pod_ready.go:82] duration metric: took 6.344856ms for pod "coredns-7c65d6cfc9-v2wsx" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.672786   31154 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.672846   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-ha-193737
	I1001 19:21:45.672857   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.672867   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.672872   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.675287   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:45.675893   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:45.675911   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.675922   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.675930   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.678241   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:45.678741   31154 pod_ready.go:93] pod "etcd-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:45.678763   31154 pod_ready.go:82] duration metric: took 5.967949ms for pod "etcd-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.678772   31154 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.678833   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-ha-193737-m02
	I1001 19:21:45.678850   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.678858   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.678871   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.681191   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:45.681800   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:45.681815   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.681825   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.681830   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.683889   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:45.684431   31154 pod_ready.go:93] pod "etcd-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:45.684453   31154 pod_ready.go:82] duration metric: took 5.673081ms for pod "etcd-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.684473   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.841835   31154 request.go:632] Waited for 157.291258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737
	I1001 19:21:45.841900   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737
	I1001 19:21:45.841906   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.841913   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.841919   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.845357   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:46.042508   31154 request.go:632] Waited for 196.405333ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:46.042588   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:46.042599   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:46.042611   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:46.042619   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:46.046254   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:46.046866   31154 pod_ready.go:93] pod "kube-apiserver-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:46.046884   31154 pod_ready.go:82] duration metric: took 362.399581ms for pod "kube-apiserver-ha-193737" in "kube-system" namespace to be "Ready" ...
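
The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter. The rest.Config dumped earlier in this log has QPS:0 and Burst:0, so the client falls back to the library defaults (5 requests/s, burst 10), which is enough to delay the back-to-back pod and node GETs here. A minimal sketch of where those knobs live, assuming the same hypothetical kubeconfig path as above:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical path; use the kubeconfig produced by the test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	// With QPS/Burst left at zero (as in the rest.Config dump above), client-go
	// applies its defaults (5 QPS, burst 10), which is what produces the
	// "client-side throttling" waits in this log. Raising them here is only an
	// illustration of where the setting lives, not something the test does.
	cfg.QPS = 50
	cfg.Burst = 100
	_ = kubernetes.NewForConfigOrDie(cfg)
}
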
	I1001 19:21:46.046893   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:46.242039   31154 request.go:632] Waited for 195.063872ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737-m02
	I1001 19:21:46.242144   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737-m02
	I1001 19:21:46.242157   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:46.242168   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:46.242174   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:46.246032   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:46.441916   31154 request.go:632] Waited for 195.330252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:46.441997   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:46.442003   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:46.442011   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:46.442014   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:46.445457   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:46.445994   31154 pod_ready.go:93] pod "kube-apiserver-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:46.446014   31154 pod_ready.go:82] duration metric: took 399.112887ms for pod "kube-apiserver-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:46.446031   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:46.642080   31154 request.go:632] Waited for 195.96912ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737
	I1001 19:21:46.642133   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737
	I1001 19:21:46.642138   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:46.642146   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:46.642149   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:46.645872   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:46.842116   31154 request.go:632] Waited for 195.42226ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:46.842206   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:46.842215   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:46.842223   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:46.842231   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:46.845287   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:46.845743   31154 pod_ready.go:93] pod "kube-controller-manager-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:46.845760   31154 pod_ready.go:82] duration metric: took 399.720077ms for pod "kube-controller-manager-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:46.845770   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:47.042048   31154 request.go:632] Waited for 196.194982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737-m02
	I1001 19:21:47.042116   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737-m02
	I1001 19:21:47.042122   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:47.042129   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:47.042134   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:47.045174   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:47.242154   31154 request.go:632] Waited for 196.389668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:47.242211   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:47.242216   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:47.242224   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:47.242228   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:47.246078   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:47.246437   31154 pod_ready.go:93] pod "kube-controller-manager-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:47.246460   31154 pod_ready.go:82] duration metric: took 400.684034ms for pod "kube-controller-manager-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:47.246470   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4294m" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:47.442023   31154 request.go:632] Waited for 195.496186ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4294m
	I1001 19:21:47.442102   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4294m
	I1001 19:21:47.442107   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:47.442115   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:47.442119   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:47.446724   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:47.642099   31154 request.go:632] Waited for 194.348221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:47.642163   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:47.642174   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:47.642181   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:47.642186   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:47.645393   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:47.645928   31154 pod_ready.go:93] pod "kube-proxy-4294m" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:47.645950   31154 pod_ready.go:82] duration metric: took 399.472712ms for pod "kube-proxy-4294m" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:47.645961   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zpsll" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:47.842563   31154 request.go:632] Waited for 196.53672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zpsll
	I1001 19:21:47.842654   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zpsll
	I1001 19:21:47.842670   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:47.842677   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:47.842685   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:47.846435   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:48.042435   31154 request.go:632] Waited for 195.268783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:48.042516   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:48.042523   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:48.042531   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:48.042535   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:48.045444   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:48.045979   31154 pod_ready.go:93] pod "kube-proxy-zpsll" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:48.045999   31154 pod_ready.go:82] duration metric: took 400.030874ms for pod "kube-proxy-zpsll" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:48.046008   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:48.242127   31154 request.go:632] Waited for 196.061352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737
	I1001 19:21:48.242188   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737
	I1001 19:21:48.242194   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:48.242200   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:48.242205   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:48.245701   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:48.442714   31154 request.go:632] Waited for 196.392016ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:48.442788   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:48.442796   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:48.442806   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:48.442811   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:48.445488   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:48.445923   31154 pod_ready.go:93] pod "kube-scheduler-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:48.445941   31154 pod_ready.go:82] duration metric: took 399.927294ms for pod "kube-scheduler-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:48.445950   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:48.642436   31154 request.go:632] Waited for 196.414559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737-m02
	I1001 19:21:48.642504   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737-m02
	I1001 19:21:48.642511   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:48.642520   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:48.642528   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:48.645886   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:48.841792   31154 request.go:632] Waited for 195.303821ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:48.841877   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:48.841893   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:48.841907   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:48.841917   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:48.845141   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:48.845610   31154 pod_ready.go:93] pod "kube-scheduler-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:48.845627   31154 pod_ready.go:82] duration metric: took 399.670346ms for pod "kube-scheduler-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:48.845638   31154 pod_ready.go:39] duration metric: took 3.199000029s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
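
The per-pod waits above fetch each system-critical pod and its node and check the pod's Ready condition. A compact sketch, under the same kubeconfig assumption, that lists the kube-system pods and reports which are not yet Ready (an illustration, not minikube's own helper):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether a pod's Ready condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path, as in the node-readiness sketch above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for i := range pods.Items {
		p := &pods.Items[i]
		fmt.Printf("%-45s ready=%v node=%s\n", p.Name, isReady(p), p.Spec.NodeName)
	}
}
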
	I1001 19:21:48.845650   31154 api_server.go:52] waiting for apiserver process to appear ...
	I1001 19:21:48.845706   31154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 19:21:48.860102   31154 api_server.go:72] duration metric: took 22.544907394s to wait for apiserver process to appear ...
	I1001 19:21:48.860136   31154 api_server.go:88] waiting for apiserver healthz status ...
	I1001 19:21:48.860157   31154 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I1001 19:21:48.864372   31154 api_server.go:279] https://192.168.39.14:8443/healthz returned 200:
	ok
	I1001 19:21:48.864454   31154 round_trippers.go:463] GET https://192.168.39.14:8443/version
	I1001 19:21:48.864464   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:48.864471   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:48.864475   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:48.865481   31154 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1001 19:21:48.865563   31154 api_server.go:141] control plane version: v1.31.1
	I1001 19:21:48.865578   31154 api_server.go:131] duration metric: took 5.43668ms to wait for apiserver health ...
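
The health gate above first probes /healthz (expecting the literal body "ok") and then reads GET /version to record the control plane version (v1.31.1 in this run). A hedged equivalent using client-go's discovery client, again with a placeholder kubeconfig path:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the test uses the one under its profile directory.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// /healthz should return the literal body "ok", as seen in the log above.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(ctx).Raw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version reports the control plane version.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}
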
	I1001 19:21:48.865588   31154 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 19:21:49.042005   31154 request.go:632] Waited for 176.346586ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:21:49.042080   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:21:49.042086   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:49.042096   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:49.042103   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:49.046797   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:49.050697   31154 system_pods.go:59] 17 kube-system pods found
	I1001 19:21:49.050730   31154 system_pods.go:61] "coredns-7c65d6cfc9-hd5hv" [31f0afff-5571-46d6-888f-8982c71ba191] Running
	I1001 19:21:49.050741   31154 system_pods.go:61] "coredns-7c65d6cfc9-v2wsx" [8e3dd318-5017-4ada-bf2f-61b640ee2030] Running
	I1001 19:21:49.050745   31154 system_pods.go:61] "etcd-ha-193737" [99c3674d-50d6-4160-89dd-afb3c2e71039] Running
	I1001 19:21:49.050749   31154 system_pods.go:61] "etcd-ha-193737-m02" [a541b9c8-c10e-45b4-ac9c-d16e8bf659b0] Running
	I1001 19:21:49.050752   31154 system_pods.go:61] "kindnet-drdlr" [13177890-a0eb-47ff-8fe2-585992810e47] Running
	I1001 19:21:49.050755   31154 system_pods.go:61] "kindnet-wnr6g" [89e11419-0c5c-486e-bdbf-eaf6fab1e62c] Running
	I1001 19:21:49.050758   31154 system_pods.go:61] "kube-apiserver-ha-193737" [432bf6a6-4607-4f55-a026-d910075d3145] Running
	I1001 19:21:49.050761   31154 system_pods.go:61] "kube-apiserver-ha-193737-m02" [379a24af-2148-4870-9f4b-25ad48943445] Running
	I1001 19:21:49.050764   31154 system_pods.go:61] "kube-controller-manager-ha-193737" [95fb44db-8cb3-4db9-8a84-ab82e15760de] Running
	I1001 19:21:49.050768   31154 system_pods.go:61] "kube-controller-manager-ha-193737-m02" [5b0821e0-673a-440c-8ca1-fefbfe913b94] Running
	I1001 19:21:49.050771   31154 system_pods.go:61] "kube-proxy-4294m" [f454ca0f-d662-4dfd-ab77-ebccaf3a6b12] Running
	I1001 19:21:49.050773   31154 system_pods.go:61] "kube-proxy-zpsll" [c18fec3c-2880-4860-b220-a44d5e523bed] Running
	I1001 19:21:49.050777   31154 system_pods.go:61] "kube-scheduler-ha-193737" [46ca2b37-6145-4111-9088-bcf51307be3a] Running
	I1001 19:21:49.050780   31154 system_pods.go:61] "kube-scheduler-ha-193737-m02" [72be6cd5-d226-46d6-b675-99014e544dfb] Running
	I1001 19:21:49.050783   31154 system_pods.go:61] "kube-vip-ha-193737" [cbe8e6a4-08f3-4db3-af4d-810a5592597c] Running
	I1001 19:21:49.050790   31154 system_pods.go:61] "kube-vip-ha-193737-m02" [da7dfa59-1b27-45ce-9e61-ad8da40ec548] Running
	I1001 19:21:49.050793   31154 system_pods.go:61] "storage-provisioner" [d5b587a6-418b-47e5-9bf7-3fb6fa5e3372] Running
	I1001 19:21:49.050802   31154 system_pods.go:74] duration metric: took 185.209049ms to wait for pod list to return data ...
	I1001 19:21:49.050812   31154 default_sa.go:34] waiting for default service account to be created ...
	I1001 19:21:49.242249   31154 request.go:632] Waited for 191.355869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/default/serviceaccounts
	I1001 19:21:49.242329   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/default/serviceaccounts
	I1001 19:21:49.242336   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:49.242346   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:49.242365   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:49.246320   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:49.246557   31154 default_sa.go:45] found service account: "default"
	I1001 19:21:49.246575   31154 default_sa.go:55] duration metric: took 195.756912ms for default service account to be created ...
	I1001 19:21:49.246582   31154 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 19:21:49.442016   31154 request.go:632] Waited for 195.370336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:21:49.442076   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:21:49.442083   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:49.442092   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:49.442101   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:49.446494   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:49.452730   31154 system_pods.go:86] 17 kube-system pods found
	I1001 19:21:49.452758   31154 system_pods.go:89] "coredns-7c65d6cfc9-hd5hv" [31f0afff-5571-46d6-888f-8982c71ba191] Running
	I1001 19:21:49.452764   31154 system_pods.go:89] "coredns-7c65d6cfc9-v2wsx" [8e3dd318-5017-4ada-bf2f-61b640ee2030] Running
	I1001 19:21:49.452768   31154 system_pods.go:89] "etcd-ha-193737" [99c3674d-50d6-4160-89dd-afb3c2e71039] Running
	I1001 19:21:49.452772   31154 system_pods.go:89] "etcd-ha-193737-m02" [a541b9c8-c10e-45b4-ac9c-d16e8bf659b0] Running
	I1001 19:21:49.452775   31154 system_pods.go:89] "kindnet-drdlr" [13177890-a0eb-47ff-8fe2-585992810e47] Running
	I1001 19:21:49.452778   31154 system_pods.go:89] "kindnet-wnr6g" [89e11419-0c5c-486e-bdbf-eaf6fab1e62c] Running
	I1001 19:21:49.452781   31154 system_pods.go:89] "kube-apiserver-ha-193737" [432bf6a6-4607-4f55-a026-d910075d3145] Running
	I1001 19:21:49.452784   31154 system_pods.go:89] "kube-apiserver-ha-193737-m02" [379a24af-2148-4870-9f4b-25ad48943445] Running
	I1001 19:21:49.452788   31154 system_pods.go:89] "kube-controller-manager-ha-193737" [95fb44db-8cb3-4db9-8a84-ab82e15760de] Running
	I1001 19:21:49.452791   31154 system_pods.go:89] "kube-controller-manager-ha-193737-m02" [5b0821e0-673a-440c-8ca1-fefbfe913b94] Running
	I1001 19:21:49.452793   31154 system_pods.go:89] "kube-proxy-4294m" [f454ca0f-d662-4dfd-ab77-ebccaf3a6b12] Running
	I1001 19:21:49.452803   31154 system_pods.go:89] "kube-proxy-zpsll" [c18fec3c-2880-4860-b220-a44d5e523bed] Running
	I1001 19:21:49.452806   31154 system_pods.go:89] "kube-scheduler-ha-193737" [46ca2b37-6145-4111-9088-bcf51307be3a] Running
	I1001 19:21:49.452809   31154 system_pods.go:89] "kube-scheduler-ha-193737-m02" [72be6cd5-d226-46d6-b675-99014e544dfb] Running
	I1001 19:21:49.452812   31154 system_pods.go:89] "kube-vip-ha-193737" [cbe8e6a4-08f3-4db3-af4d-810a5592597c] Running
	I1001 19:21:49.452815   31154 system_pods.go:89] "kube-vip-ha-193737-m02" [da7dfa59-1b27-45ce-9e61-ad8da40ec548] Running
	I1001 19:21:49.452817   31154 system_pods.go:89] "storage-provisioner" [d5b587a6-418b-47e5-9bf7-3fb6fa5e3372] Running
	I1001 19:21:49.452823   31154 system_pods.go:126] duration metric: took 206.236353ms to wait for k8s-apps to be running ...
	I1001 19:21:49.452833   31154 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 19:21:49.452882   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 19:21:49.467775   31154 system_svc.go:56] duration metric: took 14.93254ms WaitForService to wait for kubelet
	I1001 19:21:49.467809   31154 kubeadm.go:582] duration metric: took 23.152617942s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 19:21:49.467833   31154 node_conditions.go:102] verifying NodePressure condition ...
	I1001 19:21:49.642303   31154 request.go:632] Waited for 174.372716ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes
	I1001 19:21:49.642352   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes
	I1001 19:21:49.642356   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:49.642364   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:49.642369   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:49.646440   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:49.647131   31154 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 19:21:49.647176   31154 node_conditions.go:123] node cpu capacity is 2
	I1001 19:21:49.647192   31154 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 19:21:49.647199   31154 node_conditions.go:123] node cpu capacity is 2
	I1001 19:21:49.647206   31154 node_conditions.go:105] duration metric: took 179.366973ms to run NodePressure ...
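
The NodePressure verification reads each node's reported capacity; this run shows 2 CPUs and 17734596Ki of ephemeral storage per node. An illustrative sketch that lists the nodes and prints those two capacity values, under the same kubeconfig assumption as the sketches above:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}
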
	I1001 19:21:49.647235   31154 start.go:241] waiting for startup goroutines ...
	I1001 19:21:49.647267   31154 start.go:255] writing updated cluster config ...
	I1001 19:21:49.649327   31154 out.go:201] 
	I1001 19:21:49.650621   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:21:49.650719   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:21:49.652065   31154 out.go:177] * Starting "ha-193737-m03" control-plane node in "ha-193737" cluster
	I1001 19:21:49.653048   31154 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 19:21:49.653076   31154 cache.go:56] Caching tarball of preloaded images
	I1001 19:21:49.653193   31154 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 19:21:49.653209   31154 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 19:21:49.653361   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:21:49.653640   31154 start.go:360] acquireMachinesLock for ha-193737-m03: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 19:21:49.653690   31154 start.go:364] duration metric: took 31.444µs to acquireMachinesLock for "ha-193737-m03"
	I1001 19:21:49.653709   31154 start.go:93] Provisioning new machine with config: &{Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:21:49.653808   31154 start.go:125] createHost starting for "m03" (driver="kvm2")
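
For readability, here is a minimal Go sketch of the per-node shape embedded in the config dump above. The field names come from the Nodes:[...] entries in the log; the NodeSpec type name and the literal below are illustrative assumptions, not minikube's actual config types.

package main

import "fmt"

// NodeSpec mirrors the per-node fields visible in the logged config dump.
// The type name is an assumption for illustration only.
type NodeSpec struct {
	Name              string
	IP                string
	Port              int
	KubernetesVersion string
	ContainerRuntime  string
	ControlPlane      bool
	Worker            bool
}

func main() {
	// The three control-plane nodes of ha-193737 as logged above; m03 has
	// no IP yet because its VM is only being created at this point.
	nodes := []NodeSpec{
		{Name: "", IP: "192.168.39.14", Port: 8443, KubernetesVersion: "v1.31.1", ContainerRuntime: "crio", ControlPlane: true, Worker: true},
		{Name: "m02", IP: "192.168.39.27", Port: 8443, KubernetesVersion: "v1.31.1", ContainerRuntime: "crio", ControlPlane: true, Worker: true},
		{Name: "m03", IP: "", Port: 8443, KubernetesVersion: "v1.31.1", ContainerRuntime: "crio", ControlPlane: true, Worker: true},
	}
	for _, n := range nodes {
		fmt.Printf("%+v\n", n)
	}
}
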
	I1001 19:21:49.655218   31154 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 19:21:49.655330   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:21:49.655375   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:21:49.671457   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36635
	I1001 19:21:49.672015   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:21:49.672579   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:21:49.672608   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:21:49.673005   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:21:49.673189   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetMachineName
	I1001 19:21:49.673372   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:21:49.673585   31154 start.go:159] libmachine.API.Create for "ha-193737" (driver="kvm2")
	I1001 19:21:49.673614   31154 client.go:168] LocalClient.Create starting
	I1001 19:21:49.673650   31154 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem
	I1001 19:21:49.673691   31154 main.go:141] libmachine: Decoding PEM data...
	I1001 19:21:49.673722   31154 main.go:141] libmachine: Parsing certificate...
	I1001 19:21:49.673797   31154 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem
	I1001 19:21:49.673824   31154 main.go:141] libmachine: Decoding PEM data...
	I1001 19:21:49.673838   31154 main.go:141] libmachine: Parsing certificate...
	I1001 19:21:49.673873   31154 main.go:141] libmachine: Running pre-create checks...
	I1001 19:21:49.673885   31154 main.go:141] libmachine: (ha-193737-m03) Calling .PreCreateCheck
	I1001 19:21:49.674030   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetConfigRaw
	I1001 19:21:49.674391   31154 main.go:141] libmachine: Creating machine...
	I1001 19:21:49.674405   31154 main.go:141] libmachine: (ha-193737-m03) Calling .Create
	I1001 19:21:49.674509   31154 main.go:141] libmachine: (ha-193737-m03) Creating KVM machine...
	I1001 19:21:49.675629   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found existing default KVM network
	I1001 19:21:49.675774   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found existing private KVM network mk-ha-193737
	I1001 19:21:49.675890   31154 main.go:141] libmachine: (ha-193737-m03) Setting up store path in /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03 ...
	I1001 19:21:49.675911   31154 main.go:141] libmachine: (ha-193737-m03) Building disk image from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 19:21:49.675957   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:49.675868   32386 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:21:49.676067   31154 main.go:141] libmachine: (ha-193737-m03) Downloading /home/jenkins/minikube-integration/19736-11198/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 19:21:49.919887   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:49.919775   32386 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa...
	I1001 19:21:50.197974   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:50.197797   32386 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/ha-193737-m03.rawdisk...
	I1001 19:21:50.198009   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Writing magic tar header
	I1001 19:21:50.198030   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Writing SSH key tar header
	I1001 19:21:50.198044   31154 main.go:141] libmachine: (ha-193737-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03 (perms=drwx------)
	I1001 19:21:50.198058   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:50.197915   32386 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03 ...
	I1001 19:21:50.198069   31154 main.go:141] libmachine: (ha-193737-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines (perms=drwxr-xr-x)
	I1001 19:21:50.198088   31154 main.go:141] libmachine: (ha-193737-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube (perms=drwxr-xr-x)
	I1001 19:21:50.198099   31154 main.go:141] libmachine: (ha-193737-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198 (perms=drwxrwxr-x)
	I1001 19:21:50.198109   31154 main.go:141] libmachine: (ha-193737-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 19:21:50.198128   31154 main.go:141] libmachine: (ha-193737-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 19:21:50.198141   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03
	I1001 19:21:50.198152   31154 main.go:141] libmachine: (ha-193737-m03) Creating domain...
	I1001 19:21:50.198180   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines
	I1001 19:21:50.198190   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:21:50.198206   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198
	I1001 19:21:50.198215   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 19:21:50.198224   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Checking permissions on dir: /home/jenkins
	I1001 19:21:50.198235   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Checking permissions on dir: /home
	I1001 19:21:50.198248   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Skipping /home - not owner
	I1001 19:21:50.199136   31154 main.go:141] libmachine: (ha-193737-m03) define libvirt domain using xml: 
	I1001 19:21:50.199163   31154 main.go:141] libmachine: (ha-193737-m03) <domain type='kvm'>
	I1001 19:21:50.199174   31154 main.go:141] libmachine: (ha-193737-m03)   <name>ha-193737-m03</name>
	I1001 19:21:50.199182   31154 main.go:141] libmachine: (ha-193737-m03)   <memory unit='MiB'>2200</memory>
	I1001 19:21:50.199192   31154 main.go:141] libmachine: (ha-193737-m03)   <vcpu>2</vcpu>
	I1001 19:21:50.199198   31154 main.go:141] libmachine: (ha-193737-m03)   <features>
	I1001 19:21:50.199207   31154 main.go:141] libmachine: (ha-193737-m03)     <acpi/>
	I1001 19:21:50.199216   31154 main.go:141] libmachine: (ha-193737-m03)     <apic/>
	I1001 19:21:50.199226   31154 main.go:141] libmachine: (ha-193737-m03)     <pae/>
	I1001 19:21:50.199234   31154 main.go:141] libmachine: (ha-193737-m03)     
	I1001 19:21:50.199241   31154 main.go:141] libmachine: (ha-193737-m03)   </features>
	I1001 19:21:50.199248   31154 main.go:141] libmachine: (ha-193737-m03)   <cpu mode='host-passthrough'>
	I1001 19:21:50.199270   31154 main.go:141] libmachine: (ha-193737-m03)   
	I1001 19:21:50.199286   31154 main.go:141] libmachine: (ha-193737-m03)   </cpu>
	I1001 19:21:50.199295   31154 main.go:141] libmachine: (ha-193737-m03)   <os>
	I1001 19:21:50.199303   31154 main.go:141] libmachine: (ha-193737-m03)     <type>hvm</type>
	I1001 19:21:50.199315   31154 main.go:141] libmachine: (ha-193737-m03)     <boot dev='cdrom'/>
	I1001 19:21:50.199323   31154 main.go:141] libmachine: (ha-193737-m03)     <boot dev='hd'/>
	I1001 19:21:50.199334   31154 main.go:141] libmachine: (ha-193737-m03)     <bootmenu enable='no'/>
	I1001 19:21:50.199343   31154 main.go:141] libmachine: (ha-193737-m03)   </os>
	I1001 19:21:50.199352   31154 main.go:141] libmachine: (ha-193737-m03)   <devices>
	I1001 19:21:50.199367   31154 main.go:141] libmachine: (ha-193737-m03)     <disk type='file' device='cdrom'>
	I1001 19:21:50.199383   31154 main.go:141] libmachine: (ha-193737-m03)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/boot2docker.iso'/>
	I1001 19:21:50.199394   31154 main.go:141] libmachine: (ha-193737-m03)       <target dev='hdc' bus='scsi'/>
	I1001 19:21:50.199404   31154 main.go:141] libmachine: (ha-193737-m03)       <readonly/>
	I1001 19:21:50.199413   31154 main.go:141] libmachine: (ha-193737-m03)     </disk>
	I1001 19:21:50.199425   31154 main.go:141] libmachine: (ha-193737-m03)     <disk type='file' device='disk'>
	I1001 19:21:50.199441   31154 main.go:141] libmachine: (ha-193737-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 19:21:50.199458   31154 main.go:141] libmachine: (ha-193737-m03)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/ha-193737-m03.rawdisk'/>
	I1001 19:21:50.199468   31154 main.go:141] libmachine: (ha-193737-m03)       <target dev='hda' bus='virtio'/>
	I1001 19:21:50.199477   31154 main.go:141] libmachine: (ha-193737-m03)     </disk>
	I1001 19:21:50.199486   31154 main.go:141] libmachine: (ha-193737-m03)     <interface type='network'>
	I1001 19:21:50.199495   31154 main.go:141] libmachine: (ha-193737-m03)       <source network='mk-ha-193737'/>
	I1001 19:21:50.199503   31154 main.go:141] libmachine: (ha-193737-m03)       <model type='virtio'/>
	I1001 19:21:50.199531   31154 main.go:141] libmachine: (ha-193737-m03)     </interface>
	I1001 19:21:50.199562   31154 main.go:141] libmachine: (ha-193737-m03)     <interface type='network'>
	I1001 19:21:50.199576   31154 main.go:141] libmachine: (ha-193737-m03)       <source network='default'/>
	I1001 19:21:50.199588   31154 main.go:141] libmachine: (ha-193737-m03)       <model type='virtio'/>
	I1001 19:21:50.199599   31154 main.go:141] libmachine: (ha-193737-m03)     </interface>
	I1001 19:21:50.199608   31154 main.go:141] libmachine: (ha-193737-m03)     <serial type='pty'>
	I1001 19:21:50.199619   31154 main.go:141] libmachine: (ha-193737-m03)       <target port='0'/>
	I1001 19:21:50.199627   31154 main.go:141] libmachine: (ha-193737-m03)     </serial>
	I1001 19:21:50.199662   31154 main.go:141] libmachine: (ha-193737-m03)     <console type='pty'>
	I1001 19:21:50.199708   31154 main.go:141] libmachine: (ha-193737-m03)       <target type='serial' port='0'/>
	I1001 19:21:50.199726   31154 main.go:141] libmachine: (ha-193737-m03)     </console>
	I1001 19:21:50.199748   31154 main.go:141] libmachine: (ha-193737-m03)     <rng model='virtio'>
	I1001 19:21:50.199767   31154 main.go:141] libmachine: (ha-193737-m03)       <backend model='random'>/dev/random</backend>
	I1001 19:21:50.199780   31154 main.go:141] libmachine: (ha-193737-m03)     </rng>
	I1001 19:21:50.199794   31154 main.go:141] libmachine: (ha-193737-m03)     
	I1001 19:21:50.199803   31154 main.go:141] libmachine: (ha-193737-m03)     
	I1001 19:21:50.199814   31154 main.go:141] libmachine: (ha-193737-m03)   </devices>
	I1001 19:21:50.199820   31154 main.go:141] libmachine: (ha-193737-m03) </domain>
	I1001 19:21:50.199837   31154 main.go:141] libmachine: (ha-193737-m03) 
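
The XML echoed line by line above is the complete libvirt domain definition for the new VM: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO as a CD-ROM, a raw virtio disk, and two virtio NICs (the private mk-ha-193737 network plus "default"). Below is a self-contained Go sketch of how such a definition could be rendered with text/template; the template body is condensed from the logged XML, and the domainParams fields and placeholder paths are assumptions for illustration, not the kvm2 driver's actual code.

package main

import (
	"os"
	"text/template"
)

// domainTmpl condenses the libvirt <domain> definition logged above.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/><target dev='hdc' bus='scsi'/><readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/><target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'><source network='{{.PrivateNet}}'/><model type='virtio'/></interface>
    <interface type='network'><source network='default'/><model type='virtio'/></interface>
  </devices>
</domain>`

type domainParams struct {
	Name, ISOPath, DiskPath, PrivateNet string
	MemoryMiB, CPUs                     int
}

func main() {
	p := domainParams{
		Name:       "ha-193737-m03",
		MemoryMiB:  2200,
		CPUs:       2,
		ISOPath:    "/path/to/boot2docker.iso",       // placeholder path
		DiskPath:   "/path/to/ha-193737-m03.rawdisk", // placeholder path
		PrivateNet: "mk-ha-193737",
	}
	// Render to stdout; the driver would instead hand the XML to libvirt
	// to define the domain before starting it.
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
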
	I1001 19:21:50.206580   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:8b:a8:e7 in network default
	I1001 19:21:50.207376   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:50.207405   31154 main.go:141] libmachine: (ha-193737-m03) Ensuring networks are active...
	I1001 19:21:50.208168   31154 main.go:141] libmachine: (ha-193737-m03) Ensuring network default is active
	I1001 19:21:50.208498   31154 main.go:141] libmachine: (ha-193737-m03) Ensuring network mk-ha-193737 is active
	I1001 19:21:50.208873   31154 main.go:141] libmachine: (ha-193737-m03) Getting domain xml...
	I1001 19:21:50.209740   31154 main.go:141] libmachine: (ha-193737-m03) Creating domain...
	I1001 19:21:51.487699   31154 main.go:141] libmachine: (ha-193737-m03) Waiting to get IP...
	I1001 19:21:51.488558   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:51.488971   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:51.488988   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:51.488956   32386 retry.go:31] will retry after 292.057466ms: waiting for machine to come up
	I1001 19:21:51.782677   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:51.783145   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:51.783197   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:51.783106   32386 retry.go:31] will retry after 354.701551ms: waiting for machine to come up
	I1001 19:21:52.139803   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:52.140295   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:52.140322   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:52.140239   32386 retry.go:31] will retry after 363.996754ms: waiting for machine to come up
	I1001 19:21:52.505881   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:52.506427   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:52.506447   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:52.506386   32386 retry.go:31] will retry after 414.43192ms: waiting for machine to come up
	I1001 19:21:52.922204   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:52.922737   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:52.922766   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:52.922724   32386 retry.go:31] will retry after 579.407554ms: waiting for machine to come up
	I1001 19:21:53.503613   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:53.504058   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:53.504085   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:53.504000   32386 retry.go:31] will retry after 721.311664ms: waiting for machine to come up
	I1001 19:21:54.227110   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:54.227610   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:54.227655   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:54.227567   32386 retry.go:31] will retry after 1.130708111s: waiting for machine to come up
	I1001 19:21:55.360491   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:55.360900   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:55.360926   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:55.360870   32386 retry.go:31] will retry after 1.468803938s: waiting for machine to come up
	I1001 19:21:56.831225   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:56.831722   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:56.831750   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:56.831677   32386 retry.go:31] will retry after 1.742550848s: waiting for machine to come up
	I1001 19:21:58.576460   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:58.576859   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:58.576883   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:58.576823   32386 retry.go:31] will retry after 1.623668695s: waiting for machine to come up
	I1001 19:22:00.201759   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:00.202340   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:22:00.202361   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:22:00.202290   32386 retry.go:31] will retry after 1.997667198s: waiting for machine to come up
	I1001 19:22:02.201433   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:02.201901   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:22:02.201917   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:22:02.201868   32386 retry.go:31] will retry after 2.886327611s: waiting for machine to come up
	I1001 19:22:05.090402   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:05.090907   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:22:05.090933   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:22:05.090844   32386 retry.go:31] will retry after 3.87427099s: waiting for machine to come up
	I1001 19:22:08.966290   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:08.966719   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:22:08.966754   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:22:08.966674   32386 retry.go:31] will retry after 4.039315752s: waiting for machine to come up
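
The "will retry after …" lines above come from minikube's retry helper polling the new domain for a DHCP-assigned IP with growing, jittered delays. A minimal sketch of that pattern follows; lookupIP stands in for the libvirt DHCP-lease query, and both the helper and the backoff constants are illustrative, not the actual retry.go implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("no IP yet")

// lookupIP is a stand-in for querying the libvirt network's DHCP leases
// for the domain's MAC address; it fails until the guest has booted.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 { // pretend the guest needs a few polls to come up
		return "", errNoIP
	}
	return "192.168.39.101", nil
}

func main() {
	delay := 300 * time.Millisecond
	for attempt := 0; attempt < 20; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		// Grow the delay and add jitter, roughly matching the spacing of
		// the logged retries (hundreds of ms up to a few seconds).
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	fmt.Println("timed out waiting for an IP")
}
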
	I1001 19:22:13.009358   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.009842   31154 main.go:141] libmachine: (ha-193737-m03) Found IP for machine: 192.168.39.101
	I1001 19:22:13.009868   31154 main.go:141] libmachine: (ha-193737-m03) Reserving static IP address...
	I1001 19:22:13.009881   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has current primary IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.010863   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find host DHCP lease matching {name: "ha-193737-m03", mac: "52:54:00:9e:b9:5c", ip: "192.168.39.101"} in network mk-ha-193737
	I1001 19:22:13.088968   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Getting to WaitForSSH function...
	I1001 19:22:13.088993   31154 main.go:141] libmachine: (ha-193737-m03) Reserved static IP address: 192.168.39.101
	I1001 19:22:13.089006   31154 main.go:141] libmachine: (ha-193737-m03) Waiting for SSH to be available...
	I1001 19:22:13.091870   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.092415   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.092449   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.092644   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Using SSH client type: external
	I1001 19:22:13.092667   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa (-rw-------)
	I1001 19:22:13.092694   31154 main.go:141] libmachine: (ha-193737-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.101 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 19:22:13.092712   31154 main.go:141] libmachine: (ha-193737-m03) DBG | About to run SSH command:
	I1001 19:22:13.092731   31154 main.go:141] libmachine: (ha-193737-m03) DBG | exit 0
	I1001 19:22:13.220534   31154 main.go:141] libmachine: (ha-193737-m03) DBG | SSH cmd err, output: <nil>: 
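
The external-SSH probe above simply runs `exit 0` over ssh with host-key checking disabled until the command succeeds. Here is a small Go sketch of that reachability check using os/exec; the flag list is copied from the logged argument vector, while waitForSSH itself and the retry count are illustrative assumptions rather than the driver's code.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH keeps running `ssh ... docker@addr exit 0` until it succeeds,
// mirroring the probe logged above (key-only auth, no known_hosts, short timeout).
func waitForSSH(addr, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no", "-o", "IdentitiesOnly=yes",
		"-i", keyPath, "-p", "22",
		"docker@" + addr, "exit", "0",
	}
	for i := 0; i < 30; i++ {
		if err := exec.Command("/usr/bin/ssh", args...).Run(); err == nil {
			return nil // SSH is available
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s never became available", addr)
}

func main() {
	// Placeholder key path; the real run uses the machine's generated id_rsa.
	if err := waitForSSH("192.168.39.101", "/path/to/id_rsa"); err != nil {
		fmt.Println(err)
	}
}
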
	I1001 19:22:13.220779   31154 main.go:141] libmachine: (ha-193737-m03) KVM machine creation complete!
	I1001 19:22:13.221074   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetConfigRaw
	I1001 19:22:13.221579   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:22:13.221804   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:22:13.221984   31154 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 19:22:13.222002   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetState
	I1001 19:22:13.223279   31154 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 19:22:13.223293   31154 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 19:22:13.223299   31154 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 19:22:13.223305   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:13.225923   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.226398   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.226416   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.226678   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:13.226887   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.227052   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.227186   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:13.227368   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:22:13.227559   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1001 19:22:13.227571   31154 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 19:22:13.332328   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 19:22:13.332352   31154 main.go:141] libmachine: Detecting the provisioner...
	I1001 19:22:13.332384   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:13.335169   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.335569   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.335603   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.335764   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:13.336042   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.336239   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.336386   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:13.336591   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:22:13.336771   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1001 19:22:13.336783   31154 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 19:22:13.445518   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 19:22:13.445586   31154 main.go:141] libmachine: found compatible host: buildroot
	I1001 19:22:13.445594   31154 main.go:141] libmachine: Provisioning with buildroot...
	I1001 19:22:13.445601   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetMachineName
	I1001 19:22:13.445821   31154 buildroot.go:166] provisioning hostname "ha-193737-m03"
	I1001 19:22:13.445847   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetMachineName
	I1001 19:22:13.446042   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:13.449433   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.449860   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.449897   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.450180   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:13.450368   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.450566   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.450713   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:13.450881   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:22:13.451039   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1001 19:22:13.451051   31154 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-193737-m03 && echo "ha-193737-m03" | sudo tee /etc/hostname
	I1001 19:22:13.572777   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-193737-m03
	
	I1001 19:22:13.572810   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:13.575494   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.575835   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.575859   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.576047   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:13.576235   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.576419   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.576571   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:13.576759   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:22:13.576956   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1001 19:22:13.576973   31154 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-193737-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-193737-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-193737-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 19:22:13.689983   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 19:22:13.690015   31154 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 19:22:13.690038   31154 buildroot.go:174] setting up certificates
	I1001 19:22:13.690050   31154 provision.go:84] configureAuth start
	I1001 19:22:13.690066   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetMachineName
	I1001 19:22:13.690369   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetIP
	I1001 19:22:13.693242   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.693664   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.693693   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.693840   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:13.696141   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.696495   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.696524   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.696638   31154 provision.go:143] copyHostCerts
	I1001 19:22:13.696676   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:22:13.696720   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 19:22:13.696731   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:22:13.696821   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 19:22:13.696919   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:22:13.696949   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 19:22:13.696960   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:22:13.697003   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 19:22:13.697067   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:22:13.697091   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 19:22:13.697100   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:22:13.697136   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 19:22:13.697206   31154 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.ha-193737-m03 san=[127.0.0.1 192.168.39.101 ha-193737-m03 localhost minikube]
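
provision.go generates a per-machine server certificate whose SANs are exactly the names and addresses listed above (127.0.0.1, 192.168.39.101, ha-193737-m03, localhost, minikube). The compact crypto/x509 sketch below produces such a certificate; for brevity it is self-signed, whereas the real flow signs it with the ca.pem/ca-key.pem referenced in the log.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-193737-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the logged san=[...] list.
		DNSNames:    []string{"ha-193737-m03", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.101")},
	}
	// Self-signed for the sketch; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
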
	I1001 19:22:13.877573   31154 provision.go:177] copyRemoteCerts
	I1001 19:22:13.877625   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 19:22:13.877649   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:13.880678   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.880932   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.880970   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.881176   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:13.881406   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.881587   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:13.881804   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa Username:docker}
	I1001 19:22:13.962987   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 19:22:13.963068   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 19:22:13.986966   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 19:22:13.987070   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 19:22:14.013722   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 19:22:14.013794   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 19:22:14.037854   31154 provision.go:87] duration metric: took 347.788312ms to configureAuth
	I1001 19:22:14.037883   31154 buildroot.go:189] setting minikube options for container-runtime
	I1001 19:22:14.038135   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:22:14.038209   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:14.040944   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.041372   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.041401   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.041587   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:14.041771   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:14.041906   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:14.042003   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:14.042139   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:22:14.042328   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1001 19:22:14.042345   31154 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 19:22:14.262634   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 19:22:14.262673   31154 main.go:141] libmachine: Checking connection to Docker...
	I1001 19:22:14.262687   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetURL
	I1001 19:22:14.263998   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Using libvirt version 6000000
	I1001 19:22:14.266567   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.266926   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.266955   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.267154   31154 main.go:141] libmachine: Docker is up and running!
	I1001 19:22:14.267166   31154 main.go:141] libmachine: Reticulating splines...
	I1001 19:22:14.267173   31154 client.go:171] duration metric: took 24.593551771s to LocalClient.Create
	I1001 19:22:14.267196   31154 start.go:167] duration metric: took 24.593612564s to libmachine.API.Create "ha-193737"
	I1001 19:22:14.267205   31154 start.go:293] postStartSetup for "ha-193737-m03" (driver="kvm2")
	I1001 19:22:14.267214   31154 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 19:22:14.267240   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:22:14.267459   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 19:22:14.267484   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:14.269571   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.269977   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.270004   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.270121   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:14.270292   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:14.270427   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:14.270551   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa Username:docker}
	I1001 19:22:14.350988   31154 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 19:22:14.355823   31154 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 19:22:14.355848   31154 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 19:22:14.355915   31154 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 19:22:14.355986   31154 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 19:22:14.355994   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /etc/ssl/certs/184302.pem
	I1001 19:22:14.356070   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 19:22:14.366040   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:22:14.390055   31154 start.go:296] duration metric: took 122.835456ms for postStartSetup
	I1001 19:22:14.390108   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetConfigRaw
	I1001 19:22:14.390696   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetIP
	I1001 19:22:14.394065   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.394508   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.394536   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.394910   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:22:14.395150   31154 start.go:128] duration metric: took 24.741329773s to createHost
	I1001 19:22:14.395182   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:14.397581   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.397994   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.398017   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.398188   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:14.398403   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:14.398574   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:14.398727   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:14.398880   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:22:14.399094   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1001 19:22:14.399111   31154 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 19:22:14.505599   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727810534.482085733
	
	I1001 19:22:14.505628   31154 fix.go:216] guest clock: 1727810534.482085733
	I1001 19:22:14.505639   31154 fix.go:229] Guest: 2024-10-01 19:22:14.482085733 +0000 UTC Remote: 2024-10-01 19:22:14.395166889 +0000 UTC m=+146.623005707 (delta=86.918844ms)
	I1001 19:22:14.505658   31154 fix.go:200] guest clock delta is within tolerance: 86.918844ms
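
The clock check above reads `date +%s.%N` on the guest and compares it with the host clock, accepting the ~87 ms delta as within tolerance. A tiny sketch of that comparison follows, assuming parseGuestClock stands in for running the command over SSH and using an illustrative tolerance value rather than minikube's.

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// parseGuestClock converts `date +%s.%N` output, as logged above, into a
// time.Time (float parsing loses some nanosecond precision, which is fine here).
func parseGuestClock(out string) (time.Time, error) {
	secs, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return time.Time{}, err
	}
	whole := int64(secs)
	nanos := int64((secs - float64(whole)) * 1e9)
	return time.Unix(whole, nanos), nil
}

func main() {
	guest, err := parseGuestClock("1727810534.482085733")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	const tolerance = 1 * time.Second // illustrative threshold, not minikube's
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	}
}
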
	I1001 19:22:14.505664   31154 start.go:83] releasing machines lock for "ha-193737-m03", held for 24.851963464s
	I1001 19:22:14.505684   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:22:14.505908   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetIP
	I1001 19:22:14.508696   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.509064   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.509086   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.511117   31154 out.go:177] * Found network options:
	I1001 19:22:14.512450   31154 out.go:177]   - NO_PROXY=192.168.39.14,192.168.39.27
	W1001 19:22:14.513603   31154 proxy.go:119] fail to check proxy env: Error ip not in block
	W1001 19:22:14.513632   31154 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 19:22:14.513653   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:22:14.514254   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:22:14.514460   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:22:14.514553   31154 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 19:22:14.514592   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	W1001 19:22:14.514627   31154 proxy.go:119] fail to check proxy env: Error ip not in block
	W1001 19:22:14.514652   31154 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 19:22:14.514726   31154 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 19:22:14.514748   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:14.517511   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.517716   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.517872   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.517897   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.518069   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:14.518071   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.518151   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.518298   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:14.518302   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:14.518474   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:14.518512   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:14.518613   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:14.518617   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa Username:docker}
	I1001 19:22:14.518740   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa Username:docker}
	I1001 19:22:14.749140   31154 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 19:22:14.755011   31154 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 19:22:14.755083   31154 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 19:22:14.772351   31154 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 19:22:14.772388   31154 start.go:495] detecting cgroup driver to use...
	I1001 19:22:14.772457   31154 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 19:22:14.789303   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 19:22:14.804840   31154 docker.go:217] disabling cri-docker service (if available) ...
	I1001 19:22:14.804906   31154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 19:22:14.819518   31154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 19:22:14.834095   31154 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 19:22:14.944783   31154 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 19:22:15.079717   31154 docker.go:233] disabling docker service ...
	I1001 19:22:15.079790   31154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 19:22:15.095162   31154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 19:22:15.107998   31154 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 19:22:15.243729   31154 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 19:22:15.377225   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 19:22:15.391343   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 19:22:15.411068   31154 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 19:22:15.411143   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:22:15.423227   31154 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 19:22:15.423294   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:22:15.434691   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:22:15.446242   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:22:15.457352   31154 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 19:22:15.469147   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:22:15.479924   31154 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:22:15.497221   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:22:15.507678   31154 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 19:22:15.517482   31154 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 19:22:15.517554   31154 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 19:22:15.532214   31154 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
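
The two kernel prerequisites handled here (bridge traffic visible to iptables, and IPv4 forwarding) can be checked and set by hand with the same commands the log shows; a minimal sketch:

sudo modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables   # appears (and defaults to 1) once the module is loaded
sudo sysctl -w net.ipv4.ip_forward=1
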
	I1001 19:22:15.541788   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:22:15.665094   31154 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 19:22:15.757492   31154 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 19:22:15.757569   31154 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 19:22:15.762004   31154 start.go:563] Will wait 60s for crictl version
	I1001 19:22:15.762063   31154 ssh_runner.go:195] Run: which crictl
	I1001 19:22:15.766039   31154 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 19:22:15.802516   31154 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 19:22:15.802600   31154 ssh_runner.go:195] Run: crio --version
	I1001 19:22:15.831926   31154 ssh_runner.go:195] Run: crio --version
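
After rewriting the drop-in, the runtime is restarted and probed through the CRI socket. Verifying the same thing manually is a two-liner (sketch; the endpoint matches the /etc/crictl.yaml written above):

sudo systemctl daemon-reload && sudo systemctl restart crio
sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
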
	I1001 19:22:15.862187   31154 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 19:22:15.863552   31154 out.go:177]   - env NO_PROXY=192.168.39.14
	I1001 19:22:15.864903   31154 out.go:177]   - env NO_PROXY=192.168.39.14,192.168.39.27
	I1001 19:22:15.866357   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetIP
	I1001 19:22:15.868791   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:15.869113   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:15.869142   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:15.869293   31154 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 19:22:15.873237   31154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
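
The bash one-liner above is a dedup-and-append pattern for /etc/hosts: drop any existing line for the name, then append the fresh mapping. Generalized as a sketch (HOST_IP and HOST_NAME are placeholders):

HOST_IP=192.168.39.1 HOST_NAME=host.minikube.internal
{ grep -v $'\t'"${HOST_NAME}"'$' /etc/hosts; printf '%s\t%s\n' "${HOST_IP}" "${HOST_NAME}"; } > /tmp/hosts.$$
sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$
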
	I1001 19:22:15.885293   31154 mustload.go:65] Loading cluster: ha-193737
	I1001 19:22:15.885514   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:22:15.885795   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:22:15.885838   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:22:15.901055   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45767
	I1001 19:22:15.901633   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:22:15.902627   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:22:15.902658   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:22:15.903034   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:22:15.903198   31154 main.go:141] libmachine: (ha-193737) Calling .GetState
	I1001 19:22:15.905017   31154 host.go:66] Checking if "ha-193737" exists ...
	I1001 19:22:15.905429   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:22:15.905488   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:22:15.921741   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33127
	I1001 19:22:15.922203   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:22:15.923200   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:22:15.923220   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:22:15.923541   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:22:15.923744   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:22:15.923907   31154 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737 for IP: 192.168.39.101
	I1001 19:22:15.923919   31154 certs.go:194] generating shared ca certs ...
	I1001 19:22:15.923941   31154 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:22:15.924081   31154 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 19:22:15.924118   31154 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 19:22:15.924126   31154 certs.go:256] generating profile certs ...
	I1001 19:22:15.924217   31154 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key
	I1001 19:22:15.924242   31154 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.09da423f
	I1001 19:22:15.924256   31154 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.09da423f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.14 192.168.39.27 192.168.39.101 192.168.39.254]
	I1001 19:22:16.102464   31154 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.09da423f ...
	I1001 19:22:16.102493   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.09da423f: {Name:mk41b913f57e7f10c713b2e18136c742f7b09ec4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:22:16.102655   31154 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.09da423f ...
	I1001 19:22:16.102668   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.09da423f: {Name:mkaf44cea34e6bfbac4ea8c8d70ebec43d2a6d80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:22:16.102739   31154 certs.go:381] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.09da423f -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt
	I1001 19:22:16.102870   31154 certs.go:385] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.09da423f -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key
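
The apiserver certificate minted here has to carry every address a client may use: the in-cluster service IP 10.96.0.1, localhost, the three control-plane node IPs and the kube-vip VIP 192.168.39.254 (the SAN list logged above). A quick way to double-check the SANs on the node afterwards (sketch; the path is where the cert is copied later in this log):

sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
  | grep -A1 'Subject Alternative Name'
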
	I1001 19:22:16.102988   31154 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key
	I1001 19:22:16.103003   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 19:22:16.103016   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 19:22:16.103030   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 19:22:16.103042   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 19:22:16.103054   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 19:22:16.103067   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 19:22:16.103081   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 19:22:16.120441   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 19:22:16.120535   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem (1338 bytes)
	W1001 19:22:16.120569   31154 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430_empty.pem, impossibly tiny 0 bytes
	I1001 19:22:16.120579   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 19:22:16.120602   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 19:22:16.120624   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 19:22:16.120682   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 19:22:16.120730   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:22:16.120759   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /usr/share/ca-certificates/184302.pem
	I1001 19:22:16.120772   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:22:16.120784   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem -> /usr/share/ca-certificates/18430.pem
	I1001 19:22:16.120814   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:22:16.123512   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:22:16.123983   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:22:16.124012   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:22:16.124198   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:22:16.124425   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:22:16.124611   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:22:16.124747   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:22:16.196684   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1001 19:22:16.201293   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1001 19:22:16.211163   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1001 19:22:16.215061   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1001 19:22:16.225018   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1001 19:22:16.228909   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1001 19:22:16.239430   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1001 19:22:16.243222   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1001 19:22:16.253163   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1001 19:22:16.256929   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1001 19:22:16.266378   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1001 19:22:16.270062   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1001 19:22:16.278964   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 19:22:16.303288   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 19:22:16.326243   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 19:22:16.347460   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 19:22:16.372037   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1001 19:22:16.396287   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 19:22:16.420724   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 19:22:16.445707   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 19:22:16.468539   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /usr/share/ca-certificates/184302.pem (1708 bytes)
	I1001 19:22:16.492971   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 19:22:16.517838   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem --> /usr/share/ca-certificates/18430.pem (1338 bytes)
	I1001 19:22:16.541960   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1001 19:22:16.557831   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1001 19:22:16.573594   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1001 19:22:16.590168   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1001 19:22:16.607168   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1001 19:22:16.623957   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1001 19:22:16.640438   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1001 19:22:16.655967   31154 ssh_runner.go:195] Run: openssl version
	I1001 19:22:16.661524   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/184302.pem && ln -fs /usr/share/ca-certificates/184302.pem /etc/ssl/certs/184302.pem"
	I1001 19:22:16.672376   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/184302.pem
	I1001 19:22:16.676864   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 19:22:16.676922   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/184302.pem
	I1001 19:22:16.682647   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/184302.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 19:22:16.693083   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 19:22:16.703938   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:22:16.708263   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:22:16.708320   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:22:16.714520   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 19:22:16.725249   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18430.pem && ln -fs /usr/share/ca-certificates/18430.pem /etc/ssl/certs/18430.pem"
	I1001 19:22:16.736315   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18430.pem
	I1001 19:22:16.741061   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 19:22:16.741120   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18430.pem
	I1001 19:22:16.746697   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18430.pem /etc/ssl/certs/51391683.0"
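
The openssl -hash / ln -fs pairs above follow the standard OpenSSL c_rehash convention: every trusted CA in /etc/ssl/certs is reachable via a <subject-hash>.0 symlink. The same step for a single certificate, as a sketch (CERT is a placeholder path):

CERT=/usr/share/ca-certificates/minikubeCA.pem
HASH=$(openssl x509 -hash -noout -in "${CERT}")
sudo ln -fs "${CERT}" "/etc/ssl/certs/${HASH}.0"
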
	I1001 19:22:16.757551   31154 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 19:22:16.761481   31154 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 19:22:16.761539   31154 kubeadm.go:934] updating node {m03 192.168.39.101 8443 v1.31.1 crio true true} ...
	I1001 19:22:16.761636   31154 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-193737-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
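
The unit fragment printed above lands as a systemd drop-in a moment later (the 10-kubeadm.conf scp at 19:22:17 further down). Recreating an equivalent drop-in by hand would look roughly like this; the ExecStart line is the one from the log, the rest is a sketch:

sudo mkdir -p /etc/systemd/system/kubelet.service.d
sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-193737-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.101
EOF
sudo systemctl daemon-reload
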
	I1001 19:22:16.761666   31154 kube-vip.go:115] generating kube-vip config ...
	I1001 19:22:16.761704   31154 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1001 19:22:16.778682   31154 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1001 19:22:16.778755   31154 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
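
This manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml further down, so the kubelet runs kube-vip as a static pod on each control-plane node; the elected leader claims 192.168.39.254 on eth0, with control-plane load-balancing on port 8443 enabled. Two quick checks once the node is up (a sketch; /version is normally readable without credentials):

ip addr show eth0 | grep 192.168.39.254        # only the current kube-vip leader holds the VIP
curl -k https://192.168.39.254:8443/version    # apiserver answering through the VIP
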
	I1001 19:22:16.778825   31154 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 19:22:16.788174   31154 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1001 19:22:16.788258   31154 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1001 19:22:16.797330   31154 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1001 19:22:16.797360   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 19:22:16.797405   31154 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1001 19:22:16.797420   31154 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 19:22:16.797425   31154 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1001 19:22:16.797452   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 19:22:16.797455   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 19:22:16.797515   31154 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 19:22:16.806983   31154 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1001 19:22:16.807016   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1001 19:22:16.807033   31154 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1001 19:22:16.807064   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1001 19:22:16.822346   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 19:22:16.822450   31154 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 19:22:16.908222   31154 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1001 19:22:16.908266   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
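
kubectl, kubeadm and kubelet are fetched from the dl.k8s.io URLs shown above and checked against their published .sha256 files. Done by hand, the equivalent is roughly (sketch, amd64/v1.31.1 as in this run):

for b in kubectl kubeadm kubelet; do
  curl -fLO "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/${b}"
  echo "$(curl -fsL "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/${b}.sha256")  ${b}" | sha256sum --check
done
sudo install -m 0755 kubectl kubeadm kubelet /var/lib/minikube/binaries/v1.31.1/
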
	I1001 19:22:17.718151   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1001 19:22:17.728679   31154 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1001 19:22:17.753493   31154 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 19:22:17.773315   31154 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1001 19:22:17.791404   31154 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1001 19:22:17.795599   31154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 19:22:17.808083   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:22:17.928195   31154 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 19:22:17.944678   31154 host.go:66] Checking if "ha-193737" exists ...
	I1001 19:22:17.945052   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:22:17.945093   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:22:17.962020   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45629
	I1001 19:22:17.962474   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:22:17.962912   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:22:17.962940   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:22:17.963311   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:22:17.963520   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:22:17.963697   31154 start.go:317] joinCluster: &{Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:22:17.963861   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1001 19:22:17.963886   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:22:17.967232   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:22:17.967827   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:22:17.967856   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:22:17.968135   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:22:17.968336   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:22:17.968495   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:22:17.968659   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:22:18.133596   31154 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:22:18.133651   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token z7cdmg.hjk7kyt30ndw2tea --discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-193737-m03 --control-plane --apiserver-advertise-address=192.168.39.101 --apiserver-bind-port=8443"
	I1001 19:22:41.859086   31154 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token z7cdmg.hjk7kyt30ndw2tea --discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-193737-m03 --control-plane --apiserver-advertise-address=192.168.39.101 --apiserver-bind-port=8443": (23.725407283s)
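
Adding the third control plane follows the standard two-step kubeadm flow: mint a join command on an existing control-plane node, then run it on the new machine with --control-plane. A generic sketch of the same flow (token and CA hash are placeholders, not values from this run); the cluster CA and apiserver certificates were already scp'd to /var/lib/minikube/certs above, which is presumably why no --certificate-key appears:

# on an existing control-plane node
kubeadm token create --print-join-command --ttl=0

# on the new node, with the token/hash printed above (placeholders here)
sudo kubeadm join control-plane.minikube.internal:8443 \
  --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --apiserver-advertise-address=192.168.39.101 \
  --apiserver-bind-port=8443 --cri-socket unix:///var/run/crio/crio.sock \
  --node-name=ha-193737-m03 --ignore-preflight-errors=all
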
	I1001 19:22:41.859128   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1001 19:22:42.384071   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-193737-m03 minikube.k8s.io/updated_at=2024_10_01T19_22_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=ha-193737 minikube.k8s.io/primary=false
	I1001 19:22:42.510669   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-193737-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1001 19:22:42.641492   31154 start.go:319] duration metric: took 24.67779185s to joinCluster
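
The two kubectl calls just above tag the node with minikube's bookkeeping labels and remove the control-plane NoSchedule taint so the node can also carry regular workloads. The same pattern generically (sketch):

kubectl label --overwrite nodes ha-193737-m03 minikube.k8s.io/primary=false
kubectl taint nodes ha-193737-m03 node-role.kubernetes.io/control-plane:NoSchedule-
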
	I1001 19:22:42.641581   31154 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:22:42.641937   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:22:42.642770   31154 out.go:177] * Verifying Kubernetes components...
	I1001 19:22:42.643798   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:22:42.883720   31154 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 19:22:42.899372   31154 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 19:22:42.899626   31154 kapi.go:59] client config for ha-193737: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.crt", KeyFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key", CAFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1001 19:22:42.899683   31154 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.14:8443
	I1001 19:22:42.899959   31154 node_ready.go:35] waiting up to 6m0s for node "ha-193737-m03" to be "Ready" ...
	I1001 19:22:42.900040   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:42.900052   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:42.900063   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:42.900071   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:42.904647   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:22:43.401126   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:43.401152   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:43.401163   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:43.401168   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:43.405027   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:43.900824   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:43.900848   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:43.900859   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:43.900868   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:43.904531   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:44.400251   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:44.400272   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:44.400281   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:44.400285   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:44.403517   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:44.901001   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:44.901028   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:44.901036   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:44.901041   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:44.905012   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:44.905575   31154 node_ready.go:53] node "ha-193737-m03" has status "Ready":"False"
	I1001 19:22:45.400898   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:45.400924   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:45.400935   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:45.400942   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:45.405202   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:22:45.900749   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:45.900772   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:45.900781   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:45.900785   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:45.904505   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:46.400832   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:46.400855   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:46.400865   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:46.400871   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:46.404455   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:46.900834   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:46.900926   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:46.900945   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:46.900955   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:46.907848   31154 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1001 19:22:46.909060   31154 node_ready.go:53] node "ha-193737-m03" has status "Ready":"False"
	I1001 19:22:47.400619   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:47.400639   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:47.400647   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:47.400651   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:47.404519   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:47.900808   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:47.900835   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:47.900846   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:47.900851   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:48.028121   31154 round_trippers.go:574] Response Status: 200 OK in 127 milliseconds
	I1001 19:22:48.400839   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:48.400859   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:48.400866   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:48.400870   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:48.404198   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:48.900508   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:48.900533   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:48.900544   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:48.900551   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:48.904379   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:49.400836   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:49.400857   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:49.400866   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:49.400870   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:49.403736   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:22:49.404256   31154 node_ready.go:53] node "ha-193737-m03" has status "Ready":"False"
	I1001 19:22:49.901034   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:49.901058   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:49.901068   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:49.901073   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:49.905378   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:22:50.400178   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:50.400198   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:50.400206   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:50.400214   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:50.403269   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:50.901215   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:50.901242   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:50.901251   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:50.901256   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:50.905409   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:22:51.400867   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:51.400890   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:51.400899   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:51.400908   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:51.404516   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:51.404962   31154 node_ready.go:53] node "ha-193737-m03" has status "Ready":"False"
	I1001 19:22:51.900265   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:51.900308   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:51.900315   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:51.900319   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:51.903634   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:52.401178   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:52.401200   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:52.401206   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:52.401211   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:52.404511   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:52.900412   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:52.900432   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:52.900441   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:52.900446   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:52.903570   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:53.400572   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:53.400602   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:53.400614   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:53.400622   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:53.403821   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:53.900178   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:53.900201   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:53.900210   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:53.900214   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:53.903933   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:53.904621   31154 node_ready.go:53] node "ha-193737-m03" has status "Ready":"False"
	I1001 19:22:54.401040   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:54.401066   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:54.401078   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:54.401085   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:54.404732   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:54.901129   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:54.901154   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:54.901163   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:54.901166   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:54.904547   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:55.400669   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:55.400692   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:55.400700   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:55.400703   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:55.404556   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:55.900944   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:55.900966   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:55.900974   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:55.900977   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:55.904209   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:55.904851   31154 node_ready.go:53] node "ha-193737-m03" has status "Ready":"False"
	I1001 19:22:56.400513   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:56.400537   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:56.400548   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:56.400554   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:56.403671   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:56.900541   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:56.900564   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:56.900575   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:56.900582   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:56.903726   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:57.400178   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:57.400200   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:57.400209   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:57.400216   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:57.403658   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:57.901131   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:57.901154   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:57.901163   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:57.901169   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:57.904387   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:58.401066   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:58.401087   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:58.401095   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:58.401098   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:58.404875   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:58.405329   31154 node_ready.go:53] node "ha-193737-m03" has status "Ready":"False"
	I1001 19:22:58.900140   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:58.900160   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:58.900168   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:58.900172   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:58.903081   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:22:59.401118   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:59.401143   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.401153   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.401156   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.404480   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:59.405079   31154 node_ready.go:49] node "ha-193737-m03" has status "Ready":"True"
	I1001 19:22:59.405100   31154 node_ready.go:38] duration metric: took 16.505122802s for node "ha-193737-m03" to be "Ready" ...
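
The loop above issues one GET against /api/v1/nodes/ha-193737-m03 roughly every 500 ms until the Ready condition turns True. From the outside, kubectl can express the same wait in one line (sketch; the kubeconfig context is assumed to match the profile name):

kubectl --context ha-193737 wait --for=condition=Ready node/ha-193737-m03 --timeout=6m
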
	I1001 19:22:59.405110   31154 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 19:22:59.405190   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:22:59.405207   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.405217   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.405227   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.412572   31154 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1001 19:22:59.420220   31154 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hd5hv" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.420321   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hd5hv
	I1001 19:22:59.420334   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.420345   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.420353   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.423179   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:22:59.423949   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:22:59.423964   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.423970   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.423975   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.426304   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:22:59.426762   31154 pod_ready.go:93] pod "coredns-7c65d6cfc9-hd5hv" in "kube-system" namespace has status "Ready":"True"
	I1001 19:22:59.426780   31154 pod_ready.go:82] duration metric: took 6.530664ms for pod "coredns-7c65d6cfc9-hd5hv" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.426796   31154 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v2wsx" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.426857   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-v2wsx
	I1001 19:22:59.426866   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.426876   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.426887   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.429141   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:22:59.429823   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:22:59.429840   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.429848   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.429852   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.431860   31154 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1001 19:22:59.432333   31154 pod_ready.go:93] pod "coredns-7c65d6cfc9-v2wsx" in "kube-system" namespace has status "Ready":"True"
	I1001 19:22:59.432348   31154 pod_ready.go:82] duration metric: took 5.544704ms for pod "coredns-7c65d6cfc9-v2wsx" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.432374   31154 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.432437   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-ha-193737
	I1001 19:22:59.432448   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.432456   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.432459   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.434479   31154 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1001 19:22:59.435042   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:22:59.435057   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.435063   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.435067   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.437217   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:22:59.437787   31154 pod_ready.go:93] pod "etcd-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:22:59.437803   31154 pod_ready.go:82] duration metric: took 5.420394ms for pod "etcd-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.437813   31154 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.437864   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-ha-193737-m02
	I1001 19:22:59.437874   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.437883   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.437892   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.440631   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:22:59.441277   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:22:59.441295   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.441316   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.441325   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.448195   31154 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1001 19:22:59.448905   31154 pod_ready.go:93] pod "etcd-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:22:59.448925   31154 pod_ready.go:82] duration metric: took 11.104591ms for pod "etcd-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.448938   31154 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.601259   31154 request.go:632] Waited for 152.231969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-ha-193737-m03
	I1001 19:22:59.601316   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-ha-193737-m03
	I1001 19:22:59.601321   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.601329   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.601333   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.604878   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:59.801921   31154 request.go:632] Waited for 196.382761ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:59.802008   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:59.802021   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.802031   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.802037   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.805203   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:59.806083   31154 pod_ready.go:93] pod "etcd-ha-193737-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 19:22:59.806103   31154 pod_ready.go:82] duration metric: took 357.156614ms for pod "etcd-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
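The recurring request.go:632 "Waited for ... due to client-side throttling" lines above come from client-go's built-in client-side rate limiter (QPS 5, burst 10 when no custom limiter is configured), which queues the rapid-fire GETs issued by this polling loop. A minimal sketch of where those knobs live on the rest.Config; the QPS/Burst values and the kubeconfig path below are illustrative assumptions, not minikube's actual settings:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load a kubeconfig (path assumed to be the default ~/.kube/config).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }

        // client-go throttles requests on the client side; with the default
        // limiter a burst of GETs like the polling above is queued and logged
        // as "Waited for ... due to client-side throttling".
        cfg.QPS = 50    // illustrative values only
        cfg.Burst = 100

        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            panic(err)
        }
        fmt.Printf("client configured with QPS=%v burst=%v\n", cfg.QPS, cfg.Burst)
    }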
	I1001 19:22:59.806134   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:00.001202   31154 request.go:632] Waited for 194.974996ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737
	I1001 19:23:00.001255   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737
	I1001 19:23:00.001260   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:00.001267   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:00.001271   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:00.005307   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:23:00.201989   31154 request.go:632] Waited for 195.321685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:00.202114   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:00.202132   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:00.202146   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:00.202158   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:00.205788   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:00.206508   31154 pod_ready.go:93] pod "kube-apiserver-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:00.206529   31154 pod_ready.go:82] duration metric: took 400.381151ms for pod "kube-apiserver-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:00.206541   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:00.401602   31154 request.go:632] Waited for 194.993098ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737-m02
	I1001 19:23:00.401663   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737-m02
	I1001 19:23:00.401668   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:00.401676   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:00.401680   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:00.405450   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:00.601599   31154 request.go:632] Waited for 195.316962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:00.601692   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:00.601700   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:00.601707   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:00.601711   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:00.605188   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:00.605660   31154 pod_ready.go:93] pod "kube-apiserver-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:00.605679   31154 pod_ready.go:82] duration metric: took 399.130829ms for pod "kube-apiserver-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:00.605688   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:00.801836   31154 request.go:632] Waited for 196.081559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737-m03
	I1001 19:23:00.801903   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737-m03
	I1001 19:23:00.801908   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:00.801926   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:00.801931   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:00.805500   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:01.001996   31154 request.go:632] Waited for 195.706291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:01.002060   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:01.002068   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:01.002082   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:01.002090   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:01.005674   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:01.006438   31154 pod_ready.go:93] pod "kube-apiserver-ha-193737-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:01.006466   31154 pod_ready.go:82] duration metric: took 400.769669ms for pod "kube-apiserver-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:01.006480   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:01.201564   31154 request.go:632] Waited for 195.007953ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737
	I1001 19:23:01.201618   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737
	I1001 19:23:01.201623   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:01.201630   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:01.201634   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:01.204998   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:01.402159   31154 request.go:632] Waited for 196.410696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:01.402225   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:01.402232   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:01.402243   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:01.402250   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:01.405639   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:01.406259   31154 pod_ready.go:93] pod "kube-controller-manager-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:01.406284   31154 pod_ready.go:82] duration metric: took 399.796485ms for pod "kube-controller-manager-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:01.406298   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:01.601556   31154 request.go:632] Waited for 195.171182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737-m02
	I1001 19:23:01.601629   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737-m02
	I1001 19:23:01.601638   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:01.601646   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:01.601655   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:01.605271   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:01.801581   31154 request.go:632] Waited for 195.404456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:01.801644   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:01.801651   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:01.801662   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:01.801669   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:01.805042   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:01.805673   31154 pod_ready.go:93] pod "kube-controller-manager-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:01.805694   31154 pod_ready.go:82] duration metric: took 399.387622ms for pod "kube-controller-manager-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:01.805707   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:02.001904   31154 request.go:632] Waited for 195.994245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737-m03
	I1001 19:23:02.002040   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737-m03
	I1001 19:23:02.002064   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:02.002075   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:02.002080   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:02.005612   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:02.201553   31154 request.go:632] Waited for 195.185972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:02.201606   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:02.201612   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:02.201628   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:02.201645   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:02.205018   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:02.205533   31154 pod_ready.go:93] pod "kube-controller-manager-ha-193737-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:02.205552   31154 pod_ready.go:82] duration metric: took 399.838551ms for pod "kube-controller-manager-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:02.205563   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4294m" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:02.401983   31154 request.go:632] Waited for 196.357491ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4294m
	I1001 19:23:02.402038   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4294m
	I1001 19:23:02.402043   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:02.402049   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:02.402054   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:02.405225   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:02.601208   31154 request.go:632] Waited for 195.289332ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:02.601293   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:02.601304   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:02.601316   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:02.601328   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:02.604768   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:02.605212   31154 pod_ready.go:93] pod "kube-proxy-4294m" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:02.605230   31154 pod_ready.go:82] duration metric: took 399.66052ms for pod "kube-proxy-4294m" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:02.605242   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9pm4t" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:02.801359   31154 request.go:632] Waited for 196.035084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9pm4t
	I1001 19:23:02.801440   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9pm4t
	I1001 19:23:02.801448   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:02.801462   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:02.801473   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:02.804772   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:03.001444   31154 request.go:632] Waited for 196.042411ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:03.001517   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:03.001522   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:03.001536   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:03.001543   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:03.005199   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:03.005738   31154 pod_ready.go:93] pod "kube-proxy-9pm4t" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:03.005763   31154 pod_ready.go:82] duration metric: took 400.510951ms for pod "kube-proxy-9pm4t" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:03.005773   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zpsll" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:03.201543   31154 request.go:632] Waited for 195.704518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zpsll
	I1001 19:23:03.201618   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zpsll
	I1001 19:23:03.201627   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:03.201634   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:03.201639   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:03.204535   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:23:03.401528   31154 request.go:632] Waited for 196.292025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:03.401585   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:03.401590   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:03.401597   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:03.401602   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:03.405338   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:03.406008   31154 pod_ready.go:93] pod "kube-proxy-zpsll" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:03.406025   31154 pod_ready.go:82] duration metric: took 400.246215ms for pod "kube-proxy-zpsll" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:03.406035   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:03.601668   31154 request.go:632] Waited for 195.548834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737
	I1001 19:23:03.601752   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737
	I1001 19:23:03.601760   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:03.601772   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:03.601779   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:03.605345   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:03.801308   31154 request.go:632] Waited for 195.294104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:03.801403   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:03.801417   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:03.801427   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:03.801434   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:03.804468   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:03.805276   31154 pod_ready.go:93] pod "kube-scheduler-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:03.805293   31154 pod_ready.go:82] duration metric: took 399.251767ms for pod "kube-scheduler-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:03.805303   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:04.001445   31154 request.go:632] Waited for 196.067713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737-m02
	I1001 19:23:04.001522   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737-m02
	I1001 19:23:04.001531   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:04.001541   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:04.001548   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:04.004705   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:04.201792   31154 request.go:632] Waited for 196.362451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:04.201872   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:04.201879   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:04.201889   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:04.201897   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:04.205376   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:04.206212   31154 pod_ready.go:93] pod "kube-scheduler-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:04.206235   31154 pod_ready.go:82] duration metric: took 400.923668ms for pod "kube-scheduler-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:04.206250   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:04.401166   31154 request.go:632] Waited for 194.837724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737-m03
	I1001 19:23:04.401244   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737-m03
	I1001 19:23:04.401252   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:04.401266   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:04.401273   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:04.404292   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:23:04.601244   31154 request.go:632] Waited for 196.299344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:04.601300   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:04.601306   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:04.601313   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:04.601317   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:04.604470   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:04.605038   31154 pod_ready.go:93] pod "kube-scheduler-ha-193737-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:04.605055   31154 pod_ready.go:82] duration metric: took 398.796981ms for pod "kube-scheduler-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:04.605065   31154 pod_ready.go:39] duration metric: took 5.199943212s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
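The pod_ready loop logged above boils down to fetching each kube-system pod and testing whether its Ready condition reports True (plus a matching check on the pod's node). A minimal client-go sketch of that readiness predicate; the helper name isPodReady, the kubeconfig path, and the hard-coded pod name are assumptions for illustration, not minikube's code:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True,
    // the same condition the pod_ready.go lines above are waiting on.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Load the kubeconfig written by `minikube start` (path is an assumption).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
            "coredns-7c65d6cfc9-hd5hv", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("Ready:", isPodReady(pod))
    }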
	I1001 19:23:04.605079   31154 api_server.go:52] waiting for apiserver process to appear ...
	I1001 19:23:04.605144   31154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 19:23:04.623271   31154 api_server.go:72] duration metric: took 21.981652881s to wait for apiserver process to appear ...
	I1001 19:23:04.623293   31154 api_server.go:88] waiting for apiserver healthz status ...
	I1001 19:23:04.623314   31154 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I1001 19:23:04.631212   31154 api_server.go:279] https://192.168.39.14:8443/healthz returned 200:
	ok
	I1001 19:23:04.631285   31154 round_trippers.go:463] GET https://192.168.39.14:8443/version
	I1001 19:23:04.631295   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:04.631303   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:04.631310   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:04.632155   31154 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1001 19:23:04.632226   31154 api_server.go:141] control plane version: v1.31.1
	I1001 19:23:04.632243   31154 api_server.go:131] duration metric: took 8.942184ms to wait for apiserver health ...
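The healthz and /version probes above are plain HTTPS GETs against the apiserver; a healthy endpoint answers 200 with the literal body "ok". A bare-bones sketch of the same probe, with the endpoint taken from the log above; it skips TLS verification purely to keep the example short, whereas a real client would trust the cluster CA from the kubeconfig:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Apiserver endpoint as seen in the log; adjust for your own cluster.
        const healthz = "https://192.168.39.14:8443/healthz"

        // INSECURE: certificate verification is skipped only for this sketch.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}

        resp, err := client.Get(healthz)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        body, _ := io.ReadAll(resp.Body)
        // A healthy apiserver returns 200 and the body "ok".
        fmt.Printf("%d %s\n", resp.StatusCode, body)
    }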
	I1001 19:23:04.632254   31154 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 19:23:04.801981   31154 request.go:632] Waited for 169.64915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:23:04.802068   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:23:04.802079   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:04.802090   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:04.802102   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:04.809502   31154 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1001 19:23:04.815901   31154 system_pods.go:59] 24 kube-system pods found
	I1001 19:23:04.815930   31154 system_pods.go:61] "coredns-7c65d6cfc9-hd5hv" [31f0afff-5571-46d6-888f-8982c71ba191] Running
	I1001 19:23:04.815935   31154 system_pods.go:61] "coredns-7c65d6cfc9-v2wsx" [8e3dd318-5017-4ada-bf2f-61b640ee2030] Running
	I1001 19:23:04.815939   31154 system_pods.go:61] "etcd-ha-193737" [99c3674d-50d6-4160-89dd-afb3c2e71039] Running
	I1001 19:23:04.815943   31154 system_pods.go:61] "etcd-ha-193737-m02" [a541b9c8-c10e-45b4-ac9c-d16e8bf659b0] Running
	I1001 19:23:04.815946   31154 system_pods.go:61] "etcd-ha-193737-m03" [de61043b-ff4c-4d28-ab01-d63abf25ef30] Running
	I1001 19:23:04.815949   31154 system_pods.go:61] "kindnet-bqht8" [3cef1863-ae14-4ab4-bc4f-5545e058cc9c] Running
	I1001 19:23:04.815953   31154 system_pods.go:61] "kindnet-drdlr" [13177890-a0eb-47ff-8fe2-585992810e47] Running
	I1001 19:23:04.815955   31154 system_pods.go:61] "kindnet-wnr6g" [89e11419-0c5c-486e-bdbf-eaf6fab1e62c] Running
	I1001 19:23:04.815958   31154 system_pods.go:61] "kube-apiserver-ha-193737" [432bf6a6-4607-4f55-a026-d910075d3145] Running
	I1001 19:23:04.815961   31154 system_pods.go:61] "kube-apiserver-ha-193737-m02" [379a24af-2148-4870-9f4b-25ad48943445] Running
	I1001 19:23:04.815964   31154 system_pods.go:61] "kube-apiserver-ha-193737-m03" [fbf7fbec-142d-4402-9bcc-c3e25e11ac2e] Running
	I1001 19:23:04.815968   31154 system_pods.go:61] "kube-controller-manager-ha-193737" [95fb44db-8cb3-4db9-8a84-ab82e15760de] Running
	I1001 19:23:04.815971   31154 system_pods.go:61] "kube-controller-manager-ha-193737-m02" [5b0821e0-673a-440c-8ca1-fefbfe913b94] Running
	I1001 19:23:04.815974   31154 system_pods.go:61] "kube-controller-manager-ha-193737-m03" [fd854d14-6abb-42eb-b560-e816e86c6767] Running
	I1001 19:23:04.815981   31154 system_pods.go:61] "kube-proxy-4294m" [f454ca0f-d662-4dfd-ab77-ebccaf3a6b12] Running
	I1001 19:23:04.815987   31154 system_pods.go:61] "kube-proxy-9pm4t" [5dba191b-ba4a-4a22-80df-65afd1dcbfb5] Running
	I1001 19:23:04.815989   31154 system_pods.go:61] "kube-proxy-zpsll" [c18fec3c-2880-4860-b220-a44d5e523bed] Running
	I1001 19:23:04.815998   31154 system_pods.go:61] "kube-scheduler-ha-193737" [46ca2b37-6145-4111-9088-bcf51307be3a] Running
	I1001 19:23:04.816002   31154 system_pods.go:61] "kube-scheduler-ha-193737-m02" [72be6cd5-d226-46d6-b675-99014e544dfb] Running
	I1001 19:23:04.816005   31154 system_pods.go:61] "kube-scheduler-ha-193737-m03" [129167e7-febe-4de3-a35f-3f0e668c7a77] Running
	I1001 19:23:04.816008   31154 system_pods.go:61] "kube-vip-ha-193737" [cbe8e6a4-08f3-4db3-af4d-810a5592597c] Running
	I1001 19:23:04.816014   31154 system_pods.go:61] "kube-vip-ha-193737-m02" [da7dfa59-1b27-45ce-9e61-ad8da40ec548] Running
	I1001 19:23:04.816017   31154 system_pods.go:61] "kube-vip-ha-193737-m03" [7a9bbd2f-8b9a-4104-baf4-11efdd662028] Running
	I1001 19:23:04.816022   31154 system_pods.go:61] "storage-provisioner" [d5b587a6-418b-47e5-9bf7-3fb6fa5e3372] Running
	I1001 19:23:04.816027   31154 system_pods.go:74] duration metric: took 183.765578ms to wait for pod list to return data ...
	I1001 19:23:04.816036   31154 default_sa.go:34] waiting for default service account to be created ...
	I1001 19:23:05.001464   31154 request.go:632] Waited for 185.352635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/default/serviceaccounts
	I1001 19:23:05.001522   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/default/serviceaccounts
	I1001 19:23:05.001527   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:05.001534   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:05.001538   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:05.005437   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:05.005559   31154 default_sa.go:45] found service account: "default"
	I1001 19:23:05.005576   31154 default_sa.go:55] duration metric: took 189.530453ms for default service account to be created ...
	I1001 19:23:05.005589   31154 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 19:23:05.201939   31154 request.go:632] Waited for 196.276664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:23:05.201999   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:23:05.202009   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:05.202018   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:05.202026   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:05.208844   31154 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1001 19:23:05.215522   31154 system_pods.go:86] 24 kube-system pods found
	I1001 19:23:05.215551   31154 system_pods.go:89] "coredns-7c65d6cfc9-hd5hv" [31f0afff-5571-46d6-888f-8982c71ba191] Running
	I1001 19:23:05.215559   31154 system_pods.go:89] "coredns-7c65d6cfc9-v2wsx" [8e3dd318-5017-4ada-bf2f-61b640ee2030] Running
	I1001 19:23:05.215563   31154 system_pods.go:89] "etcd-ha-193737" [99c3674d-50d6-4160-89dd-afb3c2e71039] Running
	I1001 19:23:05.215567   31154 system_pods.go:89] "etcd-ha-193737-m02" [a541b9c8-c10e-45b4-ac9c-d16e8bf659b0] Running
	I1001 19:23:05.215570   31154 system_pods.go:89] "etcd-ha-193737-m03" [de61043b-ff4c-4d28-ab01-d63abf25ef30] Running
	I1001 19:23:05.215574   31154 system_pods.go:89] "kindnet-bqht8" [3cef1863-ae14-4ab4-bc4f-5545e058cc9c] Running
	I1001 19:23:05.215578   31154 system_pods.go:89] "kindnet-drdlr" [13177890-a0eb-47ff-8fe2-585992810e47] Running
	I1001 19:23:05.215581   31154 system_pods.go:89] "kindnet-wnr6g" [89e11419-0c5c-486e-bdbf-eaf6fab1e62c] Running
	I1001 19:23:05.215584   31154 system_pods.go:89] "kube-apiserver-ha-193737" [432bf6a6-4607-4f55-a026-d910075d3145] Running
	I1001 19:23:05.215588   31154 system_pods.go:89] "kube-apiserver-ha-193737-m02" [379a24af-2148-4870-9f4b-25ad48943445] Running
	I1001 19:23:05.215591   31154 system_pods.go:89] "kube-apiserver-ha-193737-m03" [fbf7fbec-142d-4402-9bcc-c3e25e11ac2e] Running
	I1001 19:23:05.215595   31154 system_pods.go:89] "kube-controller-manager-ha-193737" [95fb44db-8cb3-4db9-8a84-ab82e15760de] Running
	I1001 19:23:05.215598   31154 system_pods.go:89] "kube-controller-manager-ha-193737-m02" [5b0821e0-673a-440c-8ca1-fefbfe913b94] Running
	I1001 19:23:05.215601   31154 system_pods.go:89] "kube-controller-manager-ha-193737-m03" [fd854d14-6abb-42eb-b560-e816e86c6767] Running
	I1001 19:23:05.215603   31154 system_pods.go:89] "kube-proxy-4294m" [f454ca0f-d662-4dfd-ab77-ebccaf3a6b12] Running
	I1001 19:23:05.215606   31154 system_pods.go:89] "kube-proxy-9pm4t" [5dba191b-ba4a-4a22-80df-65afd1dcbfb5] Running
	I1001 19:23:05.215609   31154 system_pods.go:89] "kube-proxy-zpsll" [c18fec3c-2880-4860-b220-a44d5e523bed] Running
	I1001 19:23:05.215613   31154 system_pods.go:89] "kube-scheduler-ha-193737" [46ca2b37-6145-4111-9088-bcf51307be3a] Running
	I1001 19:23:05.215616   31154 system_pods.go:89] "kube-scheduler-ha-193737-m02" [72be6cd5-d226-46d6-b675-99014e544dfb] Running
	I1001 19:23:05.215621   31154 system_pods.go:89] "kube-scheduler-ha-193737-m03" [129167e7-febe-4de3-a35f-3f0e668c7a77] Running
	I1001 19:23:05.215626   31154 system_pods.go:89] "kube-vip-ha-193737" [cbe8e6a4-08f3-4db3-af4d-810a5592597c] Running
	I1001 19:23:05.215630   31154 system_pods.go:89] "kube-vip-ha-193737-m02" [da7dfa59-1b27-45ce-9e61-ad8da40ec548] Running
	I1001 19:23:05.215634   31154 system_pods.go:89] "kube-vip-ha-193737-m03" [7a9bbd2f-8b9a-4104-baf4-11efdd662028] Running
	I1001 19:23:05.215639   31154 system_pods.go:89] "storage-provisioner" [d5b587a6-418b-47e5-9bf7-3fb6fa5e3372] Running
	I1001 19:23:05.215647   31154 system_pods.go:126] duration metric: took 210.049347ms to wait for k8s-apps to be running ...
	I1001 19:23:05.215659   31154 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 19:23:05.215714   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 19:23:05.232730   31154 system_svc.go:56] duration metric: took 17.059785ms WaitForService to wait for kubelet
	I1001 19:23:05.232757   31154 kubeadm.go:582] duration metric: took 22.59114375s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
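The kubelet service check above is an exit-code test: `systemctl is-active --quiet <unit>` exits 0 only when the unit is active. The log runs it over SSH inside the VM; the sketch below shows the same test locally with os/exec for brevity:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `systemctl is-active --quiet kubelet` exits 0 iff the unit is active;
        // --quiet suppresses the state string on stdout.
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }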
	I1001 19:23:05.232773   31154 node_conditions.go:102] verifying NodePressure condition ...
	I1001 19:23:05.401103   31154 request.go:632] Waited for 168.256226ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes
	I1001 19:23:05.401154   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes
	I1001 19:23:05.401159   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:05.401165   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:05.401169   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:05.405382   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:23:05.406740   31154 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 19:23:05.406763   31154 node_conditions.go:123] node cpu capacity is 2
	I1001 19:23:05.406777   31154 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 19:23:05.406783   31154 node_conditions.go:123] node cpu capacity is 2
	I1001 19:23:05.406789   31154 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 19:23:05.406794   31154 node_conditions.go:123] node cpu capacity is 2
	I1001 19:23:05.406799   31154 node_conditions.go:105] duration metric: took 174.020761ms to run NodePressure ...
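The NodePressure pass above lists all nodes and reads the capacity they report (cpu and ephemeral-storage in this run). A short client-go sketch of that read, using the same assumed kubeconfig loading as the earlier sketches:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // Capacity is a map keyed by resource name (cpu, memory, ephemeral-storage, ...).
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
        }
    }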
	I1001 19:23:05.406816   31154 start.go:241] waiting for startup goroutines ...
	I1001 19:23:05.406842   31154 start.go:255] writing updated cluster config ...
	I1001 19:23:05.407176   31154 ssh_runner.go:195] Run: rm -f paused
	I1001 19:23:05.459358   31154 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 19:23:05.461856   31154 out.go:177] * Done! kubectl is now configured to use "ha-193737" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 01 19:26:59 ha-193737 crio[661]: time="2024-10-01 19:26:59.156230059Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810819156209548,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e3cf3853-8884-4648-b41b-ad64a8e521b7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:26:59 ha-193737 crio[661]: time="2024-10-01 19:26:59.156749748Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd6a3459-328a-44bf-8051-fc65a6e7afb5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:59 ha-193737 crio[661]: time="2024-10-01 19:26:59.156810113Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd6a3459-328a-44bf-8051-fc65a6e7afb5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:59 ha-193737 crio[661]: time="2024-10-01 19:26:59.157038365Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d523f1298c385184a9e7db15e8734998757e06fff0e55aa1844ab59ccf00f07e,PodSandboxId:8ddf36dc2effd5b387a7cb5392afc6a34d689dee2d0047e27480f81871db6f74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727810590371295276,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75485355206ed8610939d128b62d0d55ec84cbb8615f7261469f854e52b6447d,PodSandboxId:7ea8efe8e5b7908a13d9ad3be6c8e7a9e871b332f5889126ea13429055eaaed3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727810449416160580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3,PodSandboxId:b4ab4980fd9c6cd234ac1f28d252c896bc77b9f7b271045be59cae0784f96217,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449360614251,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a,PodSandboxId:69e4ceb6e3399a835dcd125c01919d9053abee035e3bf2b2595377489dc56cb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449354501899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-50
17-4ada-bf2f-61b640ee2030,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525,PodSandboxId:f7fcfb918d1fd69f11a77764747f9408ce5ec85d4f114f22269ab22622168240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278104
37213853474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c,PodSandboxId:65474abfbeabf483f4a28a3e3229b175f3f41ffad78ea5d1303d49d9419d3ec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727810437061931545,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c962c4138a0019c76f981b72e8948efd386403002bb686f7e70c38bc20c3d542,PodSandboxId:cb787d15fa3b89fa5c4a479def2aed57fb03a1d1abdf07f6170488ae8f5b774f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727810427447788769,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cf6ac3eb69fe181eb29ee323afb176,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7,PodSandboxId:4873897c8ffd7c74e4093155af6f4743491c9dcea652c8fba96e6904de277652,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727810424745051374,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e,PodSandboxId:c74bc4df7851a3393d5eca165d5e24dbdef7af644bc9844819bad425c8663aac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727810424759083818,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2c57920320eb65f4477cbff8aec818a7ecddb3461a77ca111172e4d1a5f7e71,PodSandboxId:f74fa319889b0624f5157b5a226af0e5a24f37f6922332d6e0037af95aa50aef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727810424668320387,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cd510d04d444e2a3fd26699f0dbb26,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc9d05172b801a89ec36c0d0d549bfeee1aafb67f6586e3f4f163cb63f01d062,PodSandboxId:d6e9deea0a8069c598634405f46a9fa565d1d0b888716c0c94e4ed5b44588a10,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727810424633610117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd6a3459-328a-44bf-8051-fc65a6e7afb5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:59 ha-193737 crio[661]: time="2024-10-01 19:26:59.205268754Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=52b3980a-9616-4080-aef6-e216a02e206f name=/runtime.v1.RuntimeService/Version
	Oct 01 19:26:59 ha-193737 crio[661]: time="2024-10-01 19:26:59.205392699Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=52b3980a-9616-4080-aef6-e216a02e206f name=/runtime.v1.RuntimeService/Version
	Oct 01 19:26:59 ha-193737 crio[661]: time="2024-10-01 19:26:59.207051996Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a5c0c7f2-569f-449f-b155-0e32e6b827aa name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:26:59 ha-193737 crio[661]: time="2024-10-01 19:26:59.207691480Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810819207658916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5c0c7f2-569f-449f-b155-0e32e6b827aa name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:26:59 ha-193737 crio[661]: time="2024-10-01 19:26:59.208343264Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c58a5bb9-b2a8-4d96-b213-6ff6ba11b0fa name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:59 ha-193737 crio[661]: time="2024-10-01 19:26:59.208415180Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c58a5bb9-b2a8-4d96-b213-6ff6ba11b0fa name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:59 ha-193737 crio[661]: time="2024-10-01 19:26:59.208798567Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d523f1298c385184a9e7db15e8734998757e06fff0e55aa1844ab59ccf00f07e,PodSandboxId:8ddf36dc2effd5b387a7cb5392afc6a34d689dee2d0047e27480f81871db6f74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727810590371295276,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75485355206ed8610939d128b62d0d55ec84cbb8615f7261469f854e52b6447d,PodSandboxId:7ea8efe8e5b7908a13d9ad3be6c8e7a9e871b332f5889126ea13429055eaaed3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727810449416160580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3,PodSandboxId:b4ab4980fd9c6cd234ac1f28d252c896bc77b9f7b271045be59cae0784f96217,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449360614251,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a,PodSandboxId:69e4ceb6e3399a835dcd125c01919d9053abee035e3bf2b2595377489dc56cb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449354501899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-50
17-4ada-bf2f-61b640ee2030,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525,PodSandboxId:f7fcfb918d1fd69f11a77764747f9408ce5ec85d4f114f22269ab22622168240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278104
37213853474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c,PodSandboxId:65474abfbeabf483f4a28a3e3229b175f3f41ffad78ea5d1303d49d9419d3ec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727810437061931545,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c962c4138a0019c76f981b72e8948efd386403002bb686f7e70c38bc20c3d542,PodSandboxId:cb787d15fa3b89fa5c4a479def2aed57fb03a1d1abdf07f6170488ae8f5b774f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727810427447788769,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cf6ac3eb69fe181eb29ee323afb176,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7,PodSandboxId:4873897c8ffd7c74e4093155af6f4743491c9dcea652c8fba96e6904de277652,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727810424745051374,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e,PodSandboxId:c74bc4df7851a3393d5eca165d5e24dbdef7af644bc9844819bad425c8663aac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727810424759083818,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2c57920320eb65f4477cbff8aec818a7ecddb3461a77ca111172e4d1a5f7e71,PodSandboxId:f74fa319889b0624f5157b5a226af0e5a24f37f6922332d6e0037af95aa50aef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727810424668320387,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cd510d04d444e2a3fd26699f0dbb26,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc9d05172b801a89ec36c0d0d549bfeee1aafb67f6586e3f4f163cb63f01d062,PodSandboxId:d6e9deea0a8069c598634405f46a9fa565d1d0b888716c0c94e4ed5b44588a10,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727810424633610117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c58a5bb9-b2a8-4d96-b213-6ff6ba11b0fa name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:59 ha-193737 crio[661]: time="2024-10-01 19:26:59.249596808Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=36b426c0-2d5d-4dcb-b53a-49c8b7016851 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:26:59 ha-193737 crio[661]: time="2024-10-01 19:26:59.249769147Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=36b426c0-2d5d-4dcb-b53a-49c8b7016851 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:26:59 ha-193737 crio[661]: time="2024-10-01 19:26:59.251129519Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ecf920f9-304e-41de-9379-f0f657d12ecc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:26:59 ha-193737 crio[661]: time="2024-10-01 19:26:59.251665765Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810819251638636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ecf920f9-304e-41de-9379-f0f657d12ecc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:26:59 ha-193737 crio[661]: time="2024-10-01 19:26:59.252372353Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b7f0c96-da14-43b9-b64d-8c785a15074d name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:59 ha-193737 crio[661]: time="2024-10-01 19:26:59.252456403Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b7f0c96-da14-43b9-b64d-8c785a15074d name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:59 ha-193737 crio[661]: time="2024-10-01 19:26:59.252872993Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d523f1298c385184a9e7db15e8734998757e06fff0e55aa1844ab59ccf00f07e,PodSandboxId:8ddf36dc2effd5b387a7cb5392afc6a34d689dee2d0047e27480f81871db6f74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727810590371295276,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75485355206ed8610939d128b62d0d55ec84cbb8615f7261469f854e52b6447d,PodSandboxId:7ea8efe8e5b7908a13d9ad3be6c8e7a9e871b332f5889126ea13429055eaaed3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727810449416160580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3,PodSandboxId:b4ab4980fd9c6cd234ac1f28d252c896bc77b9f7b271045be59cae0784f96217,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449360614251,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a,PodSandboxId:69e4ceb6e3399a835dcd125c01919d9053abee035e3bf2b2595377489dc56cb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449354501899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-50
17-4ada-bf2f-61b640ee2030,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525,PodSandboxId:f7fcfb918d1fd69f11a77764747f9408ce5ec85d4f114f22269ab22622168240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278104
37213853474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c,PodSandboxId:65474abfbeabf483f4a28a3e3229b175f3f41ffad78ea5d1303d49d9419d3ec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727810437061931545,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c962c4138a0019c76f981b72e8948efd386403002bb686f7e70c38bc20c3d542,PodSandboxId:cb787d15fa3b89fa5c4a479def2aed57fb03a1d1abdf07f6170488ae8f5b774f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727810427447788769,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cf6ac3eb69fe181eb29ee323afb176,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7,PodSandboxId:4873897c8ffd7c74e4093155af6f4743491c9dcea652c8fba96e6904de277652,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727810424745051374,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e,PodSandboxId:c74bc4df7851a3393d5eca165d5e24dbdef7af644bc9844819bad425c8663aac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727810424759083818,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2c57920320eb65f4477cbff8aec818a7ecddb3461a77ca111172e4d1a5f7e71,PodSandboxId:f74fa319889b0624f5157b5a226af0e5a24f37f6922332d6e0037af95aa50aef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727810424668320387,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cd510d04d444e2a3fd26699f0dbb26,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc9d05172b801a89ec36c0d0d549bfeee1aafb67f6586e3f4f163cb63f01d062,PodSandboxId:d6e9deea0a8069c598634405f46a9fa565d1d0b888716c0c94e4ed5b44588a10,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727810424633610117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1b7f0c96-da14-43b9-b64d-8c785a15074d name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:59 ha-193737 crio[661]: time="2024-10-01 19:26:59.295646491Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cb98e6cb-77a9-4d10-bcb0-31ee6c7373fc name=/runtime.v1.RuntimeService/Version
	Oct 01 19:26:59 ha-193737 crio[661]: time="2024-10-01 19:26:59.295803016Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cb98e6cb-77a9-4d10-bcb0-31ee6c7373fc name=/runtime.v1.RuntimeService/Version
	Oct 01 19:26:59 ha-193737 crio[661]: time="2024-10-01 19:26:59.297207936Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=34239d81-15fe-4a9f-b3c9-8d75c2d45d53 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:26:59 ha-193737 crio[661]: time="2024-10-01 19:26:59.297845615Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810819297813482,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34239d81-15fe-4a9f-b3c9-8d75c2d45d53 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:26:59 ha-193737 crio[661]: time="2024-10-01 19:26:59.298508927Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19b2eaf3-7e32-4a38-8b00-f0e1256b222e name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:59 ha-193737 crio[661]: time="2024-10-01 19:26:59.298595880Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19b2eaf3-7e32-4a38-8b00-f0e1256b222e name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:26:59 ha-193737 crio[661]: time="2024-10-01 19:26:59.298966776Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d523f1298c385184a9e7db15e8734998757e06fff0e55aa1844ab59ccf00f07e,PodSandboxId:8ddf36dc2effd5b387a7cb5392afc6a34d689dee2d0047e27480f81871db6f74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727810590371295276,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75485355206ed8610939d128b62d0d55ec84cbb8615f7261469f854e52b6447d,PodSandboxId:7ea8efe8e5b7908a13d9ad3be6c8e7a9e871b332f5889126ea13429055eaaed3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727810449416160580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3,PodSandboxId:b4ab4980fd9c6cd234ac1f28d252c896bc77b9f7b271045be59cae0784f96217,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449360614251,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a,PodSandboxId:69e4ceb6e3399a835dcd125c01919d9053abee035e3bf2b2595377489dc56cb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449354501899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-50
17-4ada-bf2f-61b640ee2030,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525,PodSandboxId:f7fcfb918d1fd69f11a77764747f9408ce5ec85d4f114f22269ab22622168240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278104
37213853474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c,PodSandboxId:65474abfbeabf483f4a28a3e3229b175f3f41ffad78ea5d1303d49d9419d3ec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727810437061931545,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c962c4138a0019c76f981b72e8948efd386403002bb686f7e70c38bc20c3d542,PodSandboxId:cb787d15fa3b89fa5c4a479def2aed57fb03a1d1abdf07f6170488ae8f5b774f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727810427447788769,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cf6ac3eb69fe181eb29ee323afb176,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7,PodSandboxId:4873897c8ffd7c74e4093155af6f4743491c9dcea652c8fba96e6904de277652,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727810424745051374,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e,PodSandboxId:c74bc4df7851a3393d5eca165d5e24dbdef7af644bc9844819bad425c8663aac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727810424759083818,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2c57920320eb65f4477cbff8aec818a7ecddb3461a77ca111172e4d1a5f7e71,PodSandboxId:f74fa319889b0624f5157b5a226af0e5a24f37f6922332d6e0037af95aa50aef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727810424668320387,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cd510d04d444e2a3fd26699f0dbb26,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc9d05172b801a89ec36c0d0d549bfeee1aafb67f6586e3f4f163cb63f01d062,PodSandboxId:d6e9deea0a8069c598634405f46a9fa565d1d0b888716c0c94e4ed5b44588a10,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727810424633610117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=19b2eaf3-7e32-4a38-8b00-f0e1256b222e name=/runtime.v1.RuntimeService/ListContainers
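	
	The Version, ImageFsInfo, and ListContainers entries repeated above are CRI-O's debug logging of the kubelet's periodic CRI polling; the returned container list is identical from call to call. As a minimal sketch (assuming crictl is available inside the node VM and using the CRI-O socket path recorded in the node annotations further down), the same three RPCs can be replayed by hand:
	
	  minikube -p ha-193737 ssh                                                    # enter the node VM
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version        # RuntimeService/Version
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo    # ImageService/ImageFsInfo
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a          # RuntimeService/ListContainers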
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d523f1298c385       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   8ddf36dc2effd       busybox-7dff88458-rbjkx
	75485355206ed       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   7ea8efe8e5b79       storage-provisioner
	b9a32cfd9baec       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   b4ab4980fd9c6       coredns-7c65d6cfc9-hd5hv
	c598f8345f1d8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   69e4ceb6e3399       coredns-7c65d6cfc9-v2wsx
	25b91984e532b       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   f7fcfb918d1fd       kindnet-wnr6g
	6ce5a1ca06729       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   65474abfbeabf       kube-proxy-zpsll
	c962c4138a001       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   cb787d15fa3b8       kube-vip-ha-193737
	7092a3841df08       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   c74bc4df7851a       etcd-ha-193737
	d7d722793679c       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   4873897c8ffd7       kube-scheduler-ha-193737
	d2c57920320eb       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   f74fa319889b0       kube-apiserver-ha-193737
	fc9d05172b801       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   d6e9deea0a806       kube-controller-manager-ha-193737
	
	
	==> coredns [b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3] <==
	[INFO] 10.244.1.2:43526 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.003536908s
	[INFO] 10.244.1.2:59594 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.012224538s
	[INFO] 10.244.2.2:37785 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000112105s
	[INFO] 10.244.0.4:34398 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000118394s
	[INFO] 10.244.0.4:35218 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001965777s
	[INFO] 10.244.1.2:56827 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00018086s
	[INFO] 10.244.1.2:50439 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003922693s
	[INFO] 10.244.2.2:33611 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123417s
	[INFO] 10.244.2.2:37877 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000204398s
	[INFO] 10.244.2.2:42894 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000164711s
	[INFO] 10.244.0.4:58512 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012749s
	[INFO] 10.244.0.4:60496 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126088s
	[INFO] 10.244.0.4:42876 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000054151s
	[INFO] 10.244.0.4:46048 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001023388s
	[INFO] 10.244.0.4:45307 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069619s
	[INFO] 10.244.0.4:54830 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086737s
	[INFO] 10.244.1.2:56566 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104818s
	[INFO] 10.244.2.2:44960 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017462s
	[INFO] 10.244.2.2:35520 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000147677s
	[INFO] 10.244.0.4:34887 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000089068s
	[INFO] 10.244.0.4:47038 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093137s
	[INFO] 10.244.1.2:44935 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000181924s
	[INFO] 10.244.2.2:51593 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000184246s
	[INFO] 10.244.2.2:37070 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000101666s
	[INFO] 10.244.0.4:49420 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115127s
	
	
	==> coredns [c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a] <==
	[INFO] 10.244.1.2:42880 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000139838s
	[INFO] 10.244.1.2:41832 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000162686s
	[INFO] 10.244.1.2:46697 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110911s
	[INFO] 10.244.2.2:37495 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001830157s
	[INFO] 10.244.2.2:39183 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155283s
	[INFO] 10.244.2.2:47614 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170182s
	[INFO] 10.244.2.2:52937 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001095974s
	[INFO] 10.244.2.2:59751 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106474s
	[INFO] 10.244.0.4:55786 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001514187s
	[INFO] 10.244.0.4:56387 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000050769s
	[INFO] 10.244.1.2:54787 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013733s
	[INFO] 10.244.1.2:58281 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113165s
	[INFO] 10.244.1.2:48712 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097722s
	[INFO] 10.244.2.2:57237 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152523s
	[INFO] 10.244.2.2:47314 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106445s
	[INFO] 10.244.0.4:43887 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000199016s
	[INFO] 10.244.0.4:49901 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000240769s
	[INFO] 10.244.1.2:54100 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210259s
	[INFO] 10.244.1.2:60342 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000221646s
	[INFO] 10.244.1.2:33783 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000165277s
	[INFO] 10.244.2.2:45378 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000197846s
	[INFO] 10.244.2.2:33324 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000101556s
	[INFO] 10.244.0.4:40016 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000071122s
	[INFO] 10.244.0.4:40114 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135338s
	[INFO] 10.244.0.4:53904 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00006854s
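	
	Both CoreDNS replicas above show only NOERROR/NXDOMAIN answers with sub-millisecond to low-millisecond latencies, so in-cluster DNS looks healthy at capture time. A minimal sketch for pulling the same logs through the API server (assuming the kubeconfig context carries the profile name, ha-193737):
	
	  kubectl --context ha-193737 -n kube-system get pods -l k8s-app=kube-dns      # list the CoreDNS replicas
	  kubectl --context ha-193737 -n kube-system logs coredns-7c65d6cfc9-hd5hv
	  kubectl --context ha-193737 -n kube-system logs coredns-7c65d6cfc9-v2wsx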
	
	
	==> describe nodes <==
	Name:               ha-193737
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-193737
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=ha-193737
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T19_20_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:20:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-193737
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:26:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 19:23:34 +0000   Tue, 01 Oct 2024 19:20:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 19:23:34 +0000   Tue, 01 Oct 2024 19:20:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 19:23:34 +0000   Tue, 01 Oct 2024 19:20:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 19:23:34 +0000   Tue, 01 Oct 2024 19:20:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.14
	  Hostname:    ha-193737
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 008c1ccd624b4ab3b90055ff9f65b018
	  System UUID:                008c1ccd-624b-4ab3-b900-55ff9f65b018
	  Boot ID:                    ad12c9f1-7a18-4d35-9ec9-00d91da3365b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rbjkx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 coredns-7c65d6cfc9-hd5hv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m23s
	  kube-system                 coredns-7c65d6cfc9-v2wsx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m23s
	  kube-system                 etcd-ha-193737                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m28s
	  kube-system                 kindnet-wnr6g                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m24s
	  kube-system                 kube-apiserver-ha-193737             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-controller-manager-ha-193737    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-proxy-zpsll                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-scheduler-ha-193737             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-vip-ha-193737                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m22s                  kube-proxy       
	  Normal  Starting                 6m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     6m35s (x7 over 6m36s)  kubelet          Node ha-193737 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m35s (x8 over 6m36s)  kubelet          Node ha-193737 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s (x8 over 6m36s)  kubelet          Node ha-193737 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  6m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m29s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m28s                  kubelet          Node ha-193737 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m28s                  kubelet          Node ha-193737 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m28s                  kubelet          Node ha-193737 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m24s                  node-controller  Node ha-193737 event: Registered Node ha-193737 in Controller
	  Normal  NodeReady                6m11s                  kubelet          Node ha-193737 status is now: NodeReady
	  Normal  RegisteredNode           5m28s                  node-controller  Node ha-193737 event: Registered Node ha-193737 in Controller
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-193737 event: Registered Node ha-193737 in Controller
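	
	ha-193737 itself is Ready with 950m of its 2 CPUs requested; the secondary control-plane node described next reports every condition as Unknown (NodeStatusUnknown since 19:25:00) and carries the node.kubernetes.io/unreachable NoSchedule/NoExecute taints, consistent with that node having been stopped by the test. A minimal sketch (same context-name assumption as above) for summarizing readiness and taints without the full describe output:
	
	  kubectl --context ha-193737 get nodes -o wide                                # one-line status per node
	  kubectl --context ha-193737 get node ha-193737-m02 \
	    -o jsonpath='{.spec.taints}{"\n"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}'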
	
	
	Name:               ha-193737-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-193737-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=ha-193737
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T19_21_26_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:21:23 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-193737-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:24:17 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 01 Oct 2024 19:23:25 +0000   Tue, 01 Oct 2024 19:25:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 01 Oct 2024 19:23:25 +0000   Tue, 01 Oct 2024 19:25:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 01 Oct 2024 19:23:25 +0000   Tue, 01 Oct 2024 19:25:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 01 Oct 2024 19:23:25 +0000   Tue, 01 Oct 2024 19:25:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    ha-193737-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e20c76476d7c4acaa5fd75e5b8fa3bab
	  System UUID:                e20c7647-6d7c-4aca-a5fd-75e5b8fa3bab
	  Boot ID:                    6ae84c19-5df4-457f-b75c-eae86d5e0ee1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fz5bb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 etcd-ha-193737-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m34s
	  kube-system                 kindnet-drdlr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m36s
	  kube-system                 kube-apiserver-ha-193737-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-controller-manager-ha-193737-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-proxy-4294m                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-scheduler-ha-193737-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-vip-ha-193737-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m31s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m36s (x8 over 5m36s)  kubelet          Node ha-193737-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m36s (x8 over 5m36s)  kubelet          Node ha-193737-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m36s (x7 over 5m36s)  kubelet          Node ha-193737-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m34s                  node-controller  Node ha-193737-m02 event: Registered Node ha-193737-m02 in Controller
	  Normal  RegisteredNode           5m28s                  node-controller  Node ha-193737-m02 event: Registered Node ha-193737-m02 in Controller
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-193737-m02 event: Registered Node ha-193737-m02 in Controller
	  Normal  NodeNotReady             119s                   node-controller  Node ha-193737-m02 status is now: NodeNotReady
	
	
	Name:               ha-193737-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-193737-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=ha-193737
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T19_22_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:22:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-193737-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:26:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 19:23:39 +0000   Tue, 01 Oct 2024 19:22:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 19:23:39 +0000   Tue, 01 Oct 2024 19:22:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 19:23:39 +0000   Tue, 01 Oct 2024 19:22:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 19:23:39 +0000   Tue, 01 Oct 2024 19:22:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    ha-193737-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f175e16bf19e4217880e926a75ac0065
	  System UUID:                f175e16b-f19e-4217-880e-926a75ac0065
	  Boot ID:                    5dc1c664-a01d-46eb-a066-a1970597b392
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-qzzzv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 etcd-ha-193737-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m19s
	  kube-system                 kindnet-bqht8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m21s
	  kube-system                 kube-apiserver-ha-193737-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-ha-193737-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-proxy-9pm4t                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-scheduler-ha-193737-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-vip-ha-193737-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m16s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m21s (x8 over 4m21s)  kubelet          Node ha-193737-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s (x8 over 4m21s)  kubelet          Node ha-193737-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s (x7 over 4m21s)  kubelet          Node ha-193737-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-193737-m03 event: Registered Node ha-193737-m03 in Controller
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-193737-m03 event: Registered Node ha-193737-m03 in Controller
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-193737-m03 event: Registered Node ha-193737-m03 in Controller
	
	
	Name:               ha-193737-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-193737-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=ha-193737
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T19_23_47_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:23:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-193737-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:26:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 19:24:17 +0000   Tue, 01 Oct 2024 19:23:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 19:24:17 +0000   Tue, 01 Oct 2024 19:23:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 19:24:17 +0000   Tue, 01 Oct 2024 19:23:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 19:24:17 +0000   Tue, 01 Oct 2024 19:24:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.152
	  Hostname:    ha-193737-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef1097b5e0604ff19d7361f2921010b9
	  System UUID:                ef1097b5-e060-4ff1-9d73-61f2921010b9
	  Boot ID:                    e616be63-4a8a-41b8-a0fc-2b1d892a1200
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-h886q       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m13s
	  kube-system                 kube-proxy-hz2nn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m13s (x3 over 3m13s)  kubelet          Node ha-193737-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m13s (x3 over 3m13s)  kubelet          Node ha-193737-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m13s (x3 over 3m13s)  kubelet          Node ha-193737-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m9s                   node-controller  Node ha-193737-m04 event: Registered Node ha-193737-m04 in Controller
	  Normal  RegisteredNode           3m8s                   node-controller  Node ha-193737-m04 event: Registered Node ha-193737-m04 in Controller
	  Normal  RegisteredNode           3m8s                   node-controller  Node ha-193737-m04 event: Registered Node ha-193737-m04 in Controller
	  Normal  NodeReady                2m53s                  kubelet          Node ha-193737-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 1 19:19] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050773] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037054] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.754509] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.921161] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Oct 1 19:20] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.804167] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.059657] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065329] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.157689] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.148971] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.256595] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +3.897654] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +5.026995] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.059544] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.061605] systemd-fstab-generator[1305]: Ignoring "noauto" option for root device
	[  +0.119912] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.150839] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.375138] kauditd_printk_skb: 38 callbacks suppressed
	[Oct 1 19:21] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e] <==
	{"level":"warn","ts":"2024-10-01T19:26:59.575433Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:59.583159Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:59.592862Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:59.597813Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:59.601594Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:59.607935Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:59.615482Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:59.622398Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:59.627119Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:59.631206Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:59.640140Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:59.650149Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:59.651530Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:59.652522Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:59.658928Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:59.659152Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:59.662230Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:59.665654Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:59.669578Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:59.693011Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:59.716021Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:59.760049Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:59.761806Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:59.767082Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:26:59.788572Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:26:59 up 7 min,  0 users,  load average: 0.40, 0.34, 0.18
	Linux ha-193737 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525] <==
	I1001 19:26:28.354480       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	I1001 19:26:38.345063       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I1001 19:26:38.345186       1 main.go:299] handling current node
	I1001 19:26:38.345230       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I1001 19:26:38.345253       1 main.go:322] Node ha-193737-m02 has CIDR [10.244.1.0/24] 
	I1001 19:26:38.345420       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I1001 19:26:38.345447       1 main.go:322] Node ha-193737-m03 has CIDR [10.244.2.0/24] 
	I1001 19:26:38.345532       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1001 19:26:38.345554       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	I1001 19:26:48.348795       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I1001 19:26:48.348915       1 main.go:322] Node ha-193737-m02 has CIDR [10.244.1.0/24] 
	I1001 19:26:48.349232       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I1001 19:26:48.349245       1 main.go:322] Node ha-193737-m03 has CIDR [10.244.2.0/24] 
	I1001 19:26:48.349309       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1001 19:26:48.349316       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	I1001 19:26:48.349384       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I1001 19:26:48.349392       1 main.go:299] handling current node
	I1001 19:26:58.353065       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I1001 19:26:58.353567       1 main.go:299] handling current node
	I1001 19:26:58.353642       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I1001 19:26:58.353908       1 main.go:322] Node ha-193737-m02 has CIDR [10.244.1.0/24] 
	I1001 19:26:58.354113       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I1001 19:26:58.354274       1 main.go:322] Node ha-193737-m03 has CIDR [10.244.2.0/24] 
	I1001 19:26:58.354412       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1001 19:26:58.354463       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [d2c57920320eb65f4477cbff8aec818a7ecddb3461a77ca111172e4d1a5f7e71] <==
	I1001 19:20:35.856444       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1001 19:20:35.965501       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1001 19:21:24.240949       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1001 19:21:24.240967       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 17.015µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1001 19:21:24.242740       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1001 19:21:24.244065       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1001 19:21:24.245377       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.686767ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1001 19:23:11.375797       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53914: use of closed network connection
	E1001 19:23:11.551258       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53928: use of closed network connection
	E1001 19:23:11.731362       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53936: use of closed network connection
	E1001 19:23:11.972041       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53954: use of closed network connection
	E1001 19:23:12.366625       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53984: use of closed network connection
	E1001 19:23:12.546073       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54012: use of closed network connection
	E1001 19:23:12.732610       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54022: use of closed network connection
	E1001 19:23:12.902151       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54038: use of closed network connection
	E1001 19:23:13.375286       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54102: use of closed network connection
	E1001 19:23:13.554664       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54126: use of closed network connection
	E1001 19:23:13.743236       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54138: use of closed network connection
	E1001 19:23:13.926913       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54164: use of closed network connection
	E1001 19:23:14.106331       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54176: use of closed network connection
	E1001 19:23:47.033544       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1001 19:23:47.034526       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 71.236µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1001 19:23:47.042011       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1001 19:23:47.046959       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1001 19:23:47.048673       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="15.259067ms" method="POST" path="/api/v1/namespaces/default/events" result=null
	
	
	==> kube-controller-manager [fc9d05172b801a89ec36c0d0d549bfeee1aafb67f6586e3f4f163cb63f01d062] <==
	I1001 19:23:46.953662       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-193737-m04\" does not exist"
	I1001 19:23:46.986878       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-193737-m04" podCIDRs=["10.244.3.0/24"]
	I1001 19:23:46.986941       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:46.987007       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:47.215804       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:47.592799       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:50.155095       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-193737-m04"
	I1001 19:23:50.259908       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:51.578375       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:51.680209       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:51.931826       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:52.014093       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:57.305544       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:24:06.597966       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:24:06.598358       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-193737-m04"
	I1001 19:24:06.614401       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:24:06.949883       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:24:17.699273       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:25:00.186561       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m02"
	I1001 19:25:00.186799       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-193737-m04"
	I1001 19:25:00.216973       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m02"
	I1001 19:25:00.303275       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="68.678995ms"
	I1001 19:25:00.303561       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="142.589µs"
	I1001 19:25:01.983529       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m02"
	I1001 19:25:05.453661       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m02"
	
	
	==> kube-proxy [6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 19:20:37.420079       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 19:20:37.442921       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.14"]
	E1001 19:20:37.443047       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 19:20:37.482251       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 19:20:37.482297       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 19:20:37.482322       1 server_linux.go:169] "Using iptables Proxier"
	I1001 19:20:37.485863       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 19:20:37.486623       1 server.go:483] "Version info" version="v1.31.1"
	I1001 19:20:37.486654       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 19:20:37.489107       1 config.go:199] "Starting service config controller"
	I1001 19:20:37.489328       1 config.go:105] "Starting endpoint slice config controller"
	I1001 19:20:37.489656       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 19:20:37.489772       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 19:20:37.491468       1 config.go:328] "Starting node config controller"
	I1001 19:20:37.491495       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 19:20:37.590528       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 19:20:37.590619       1 shared_informer.go:320] Caches are synced for service config
	I1001 19:20:37.591994       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7] <==
	E1001 19:20:29.084572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1001 19:20:30.974700       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1001 19:23:06.369501       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rbjkx\": pod busybox-7dff88458-rbjkx is already assigned to node \"ha-193737\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rbjkx" node="ha-193737"
	E1001 19:23:06.370091       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ba3ecbe1-fb88-4674-b679-a442b28cd68e(default/busybox-7dff88458-rbjkx) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-rbjkx"
	E1001 19:23:06.370388       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rbjkx\": pod busybox-7dff88458-rbjkx is already assigned to node \"ha-193737\"" pod="default/busybox-7dff88458-rbjkx"
	I1001 19:23:06.374870       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-rbjkx" node="ha-193737"
	E1001 19:23:06.474319       1 schedule_one.go:1078] "Error occurred" err="Pod default/busybox-7dff88458-9k8vh is already present in the active queue" pod="default/busybox-7dff88458-9k8vh"
	E1001 19:23:06.510626       1 schedule_one.go:1078] "Error occurred" err="Pod default/busybox-7dff88458-x4nmn is already present in the active queue" pod="default/busybox-7dff88458-x4nmn"
	E1001 19:23:47.032927       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-tfcsk\": pod kindnet-tfcsk is already assigned to node \"ha-193737-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-tfcsk" node="ha-193737-m04"
	E1001 19:23:47.033064       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-tfcsk\": pod kindnet-tfcsk is already assigned to node \"ha-193737-m04\"" pod="kube-system/kindnet-tfcsk"
	E1001 19:23:47.032927       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-hz2nn\": pod kube-proxy-hz2nn is already assigned to node \"ha-193737-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-hz2nn" node="ha-193737-m04"
	E1001 19:23:47.045815       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 4f960179-106c-4201-b54b-eea8c5aea0dc(kube-system/kube-proxy-hz2nn) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-hz2nn"
	E1001 19:23:47.046589       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-hz2nn\": pod kube-proxy-hz2nn is already assigned to node \"ha-193737-m04\"" pod="kube-system/kube-proxy-hz2nn"
	I1001 19:23:47.046769       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-hz2nn" node="ha-193737-m04"
	E1001 19:23:47.062993       1 schedule_one.go:953] "Scheduler cache AssumePod failed" err="pod 046c48a4-b41b-4a77-8949-aa553947416b(kube-system/kindnet-h886q) is in the cache, so can't be assumed" pod="kube-system/kindnet-h886q"
	E1001 19:23:47.065004       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="pod 046c48a4-b41b-4a77-8949-aa553947416b(kube-system/kindnet-h886q) is in the cache, so can't be assumed" pod="kube-system/kindnet-h886q"
	I1001 19:23:47.065109       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-h886q" node="ha-193737-m04"
	E1001 19:23:47.081592       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-z5qhk\": pod kube-proxy-z5qhk is already assigned to node \"ha-193737-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-z5qhk" node="ha-193737-m04"
	E1001 19:23:47.081864       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 785d6c85-2697-4f02-80a4-55483a0faa64(kube-system/kube-proxy-z5qhk) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-z5qhk"
	E1001 19:23:47.081920       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-z5qhk\": pod kube-proxy-z5qhk is already assigned to node \"ha-193737-m04\"" pod="kube-system/kube-proxy-z5qhk"
	I1001 19:23:47.083299       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-z5qhk" node="ha-193737-m04"
	E1001 19:23:47.138476       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4q2pc\": pod kindnet-4q2pc is already assigned to node \"ha-193737-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4q2pc" node="ha-193737-m04"
	E1001 19:23:47.138649       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f23b02a5-c64e-44c3-83b9-7192d19a6efc(kube-system/kindnet-4q2pc) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-4q2pc"
	E1001 19:23:47.138779       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4q2pc\": pod kindnet-4q2pc is already assigned to node \"ha-193737-m04\"" pod="kube-system/kindnet-4q2pc"
	I1001 19:23:47.138823       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-4q2pc" node="ha-193737-m04"
	
	
	==> kubelet <==
	Oct 01 19:25:31 ha-193737 kubelet[1313]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 19:25:31 ha-193737 kubelet[1313]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 19:25:31 ha-193737 kubelet[1313]: E1001 19:25:31.112855    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810731112438565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:25:31 ha-193737 kubelet[1313]: E1001 19:25:31.112899    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810731112438565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:25:41 ha-193737 kubelet[1313]: E1001 19:25:41.114457    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810741114104863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:25:41 ha-193737 kubelet[1313]: E1001 19:25:41.114791    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810741114104863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:25:51 ha-193737 kubelet[1313]: E1001 19:25:51.116278    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810751115811001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:25:51 ha-193737 kubelet[1313]: E1001 19:25:51.116653    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810751115811001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:01 ha-193737 kubelet[1313]: E1001 19:26:01.119303    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810761118827447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:01 ha-193737 kubelet[1313]: E1001 19:26:01.119351    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810761118827447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:11 ha-193737 kubelet[1313]: E1001 19:26:11.121360    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810771121035313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:11 ha-193737 kubelet[1313]: E1001 19:26:11.121412    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810771121035313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:21 ha-193737 kubelet[1313]: E1001 19:26:21.123512    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810781123120430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:21 ha-193737 kubelet[1313]: E1001 19:26:21.123938    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810781123120430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:31 ha-193737 kubelet[1313]: E1001 19:26:31.044582    1313 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 01 19:26:31 ha-193737 kubelet[1313]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 01 19:26:31 ha-193737 kubelet[1313]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 01 19:26:31 ha-193737 kubelet[1313]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 19:26:31 ha-193737 kubelet[1313]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 19:26:31 ha-193737 kubelet[1313]: E1001 19:26:31.126194    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810791125910385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:31 ha-193737 kubelet[1313]: E1001 19:26:31.126217    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810791125910385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:41 ha-193737 kubelet[1313]: E1001 19:26:41.128087    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810801127576002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:41 ha-193737 kubelet[1313]: E1001 19:26:41.128431    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810801127576002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:51 ha-193737 kubelet[1313]: E1001 19:26:51.130945    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810811130429680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:51 ha-193737 kubelet[1313]: E1001 19:26:51.131267    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810811130429680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-193737 -n ha-193737
helpers_test.go:261: (dbg) Run:  kubectl --context ha-193737 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.65s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1001 19:27:02.544556   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.95116593s)
ha_test.go:309: expected profile "ha-193737" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-193737\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-193737\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-193737\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.14\",\"Port\":8443,\"Kuberne
tesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.27\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.101\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.152\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"me
tallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":2
62144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-193737 -n ha-193737
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-193737 logs -n 25: (1.433544778s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m03:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737:/home/docker/cp-test_ha-193737-m03_ha-193737.txt                       |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737 sudo cat                                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m03_ha-193737.txt                                 |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m03:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m02:/home/docker/cp-test_ha-193737-m03_ha-193737-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737-m02 sudo cat                                          | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m03_ha-193737-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m03:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04:/home/docker/cp-test_ha-193737-m03_ha-193737-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737-m04 sudo cat                                          | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m03_ha-193737-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-193737 cp testdata/cp-test.txt                                                | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m04:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1122815483/001/cp-test_ha-193737-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m04:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737:/home/docker/cp-test_ha-193737-m04_ha-193737.txt                       |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737 sudo cat                                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m04_ha-193737.txt                                 |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m04:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m02:/home/docker/cp-test_ha-193737-m04_ha-193737-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737-m02 sudo cat                                          | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m04_ha-193737-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m04:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m03:/home/docker/cp-test_ha-193737-m04_ha-193737-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737-m03 sudo cat                                          | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m04_ha-193737-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-193737 node stop m02 -v=7                                                     | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-193737 node start m02 -v=7                                                    | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 19:19:47
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 19:19:47.806967   31154 out.go:345] Setting OutFile to fd 1 ...
	I1001 19:19:47.807072   31154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:19:47.807081   31154 out.go:358] Setting ErrFile to fd 2...
	I1001 19:19:47.807085   31154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:19:47.807300   31154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 19:19:47.807883   31154 out.go:352] Setting JSON to false
	I1001 19:19:47.808862   31154 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3730,"bootTime":1727806658,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 19:19:47.808959   31154 start.go:139] virtualization: kvm guest
	I1001 19:19:47.810915   31154 out.go:177] * [ha-193737] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 19:19:47.812033   31154 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 19:19:47.812047   31154 notify.go:220] Checking for updates...
	I1001 19:19:47.814140   31154 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 19:19:47.815207   31154 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 19:19:47.816467   31154 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:19:47.817736   31154 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 19:19:47.818886   31154 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 19:19:47.820159   31154 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 19:19:47.855456   31154 out.go:177] * Using the kvm2 driver based on user configuration
	I1001 19:19:47.856527   31154 start.go:297] selected driver: kvm2
	I1001 19:19:47.856547   31154 start.go:901] validating driver "kvm2" against <nil>
	I1001 19:19:47.856562   31154 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 19:19:47.857294   31154 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 19:19:47.857376   31154 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-11198/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 19:19:47.872487   31154 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 19:19:47.872546   31154 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 19:19:47.872796   31154 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 19:19:47.872826   31154 cni.go:84] Creating CNI manager for ""
	I1001 19:19:47.872874   31154 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1001 19:19:47.872886   31154 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 19:19:47.872938   31154 start.go:340] cluster config:
	{Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1001 19:19:47.873050   31154 iso.go:125] acquiring lock: {Name:mk06aa0d5182c9fbfa5e40313025cbec2b4400a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 19:19:47.874719   31154 out.go:177] * Starting "ha-193737" primary control-plane node in "ha-193737" cluster
	I1001 19:19:47.875804   31154 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 19:19:47.875840   31154 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 19:19:47.875850   31154 cache.go:56] Caching tarball of preloaded images
	I1001 19:19:47.875957   31154 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 19:19:47.875970   31154 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 19:19:47.876255   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:19:47.876273   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json: {Name:mk44677a1f0c01c3be022903d4a146ca8f437dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:19:47.876454   31154 start.go:360] acquireMachinesLock for ha-193737: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 19:19:47.876490   31154 start.go:364] duration metric: took 20.799µs to acquireMachinesLock for "ha-193737"
	I1001 19:19:47.876512   31154 start.go:93] Provisioning new machine with config: &{Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:19:47.876581   31154 start.go:125] createHost starting for "" (driver="kvm2")
	I1001 19:19:47.878132   31154 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 19:19:47.878257   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:19:47.878301   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:19:47.892637   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32839
	I1001 19:19:47.893161   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:19:47.893766   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:19:47.893788   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:19:47.894083   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:19:47.894225   31154 main.go:141] libmachine: (ha-193737) Calling .GetMachineName
	I1001 19:19:47.894350   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:19:47.894482   31154 start.go:159] libmachine.API.Create for "ha-193737" (driver="kvm2")
	I1001 19:19:47.894506   31154 client.go:168] LocalClient.Create starting
	I1001 19:19:47.894539   31154 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem
	I1001 19:19:47.894575   31154 main.go:141] libmachine: Decoding PEM data...
	I1001 19:19:47.894607   31154 main.go:141] libmachine: Parsing certificate...
	I1001 19:19:47.894667   31154 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem
	I1001 19:19:47.894686   31154 main.go:141] libmachine: Decoding PEM data...
	I1001 19:19:47.894699   31154 main.go:141] libmachine: Parsing certificate...
	I1001 19:19:47.894713   31154 main.go:141] libmachine: Running pre-create checks...
	I1001 19:19:47.894730   31154 main.go:141] libmachine: (ha-193737) Calling .PreCreateCheck
	I1001 19:19:47.895057   31154 main.go:141] libmachine: (ha-193737) Calling .GetConfigRaw
	I1001 19:19:47.895392   31154 main.go:141] libmachine: Creating machine...
	I1001 19:19:47.895405   31154 main.go:141] libmachine: (ha-193737) Calling .Create
	I1001 19:19:47.895568   31154 main.go:141] libmachine: (ha-193737) Creating KVM machine...
	I1001 19:19:47.896749   31154 main.go:141] libmachine: (ha-193737) DBG | found existing default KVM network
	I1001 19:19:47.897409   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:47.897251   31177 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211e0}
	I1001 19:19:47.897459   31154 main.go:141] libmachine: (ha-193737) DBG | created network xml: 
	I1001 19:19:47.897477   31154 main.go:141] libmachine: (ha-193737) DBG | <network>
	I1001 19:19:47.897495   31154 main.go:141] libmachine: (ha-193737) DBG |   <name>mk-ha-193737</name>
	I1001 19:19:47.897509   31154 main.go:141] libmachine: (ha-193737) DBG |   <dns enable='no'/>
	I1001 19:19:47.897529   31154 main.go:141] libmachine: (ha-193737) DBG |   
	I1001 19:19:47.897549   31154 main.go:141] libmachine: (ha-193737) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1001 19:19:47.897562   31154 main.go:141] libmachine: (ha-193737) DBG |     <dhcp>
	I1001 19:19:47.897573   31154 main.go:141] libmachine: (ha-193737) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1001 19:19:47.897582   31154 main.go:141] libmachine: (ha-193737) DBG |     </dhcp>
	I1001 19:19:47.897589   31154 main.go:141] libmachine: (ha-193737) DBG |   </ip>
	I1001 19:19:47.897594   31154 main.go:141] libmachine: (ha-193737) DBG |   
	I1001 19:19:47.897599   31154 main.go:141] libmachine: (ha-193737) DBG | </network>
	I1001 19:19:47.897608   31154 main.go:141] libmachine: (ha-193737) DBG | 
	I1001 19:19:47.902355   31154 main.go:141] libmachine: (ha-193737) DBG | trying to create private KVM network mk-ha-193737 192.168.39.0/24...
	I1001 19:19:47.965826   31154 main.go:141] libmachine: (ha-193737) DBG | private KVM network mk-ha-193737 192.168.39.0/24 created
	I1001 19:19:47.965857   31154 main.go:141] libmachine: (ha-193737) Setting up store path in /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737 ...
	I1001 19:19:47.965875   31154 main.go:141] libmachine: (ha-193737) Building disk image from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 19:19:47.965943   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:47.965838   31177 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:19:47.966014   31154 main.go:141] libmachine: (ha-193737) Downloading /home/jenkins/minikube-integration/19736-11198/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 19:19:48.225463   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:48.225322   31177 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa...
	I1001 19:19:48.498755   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:48.498602   31177 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/ha-193737.rawdisk...
	I1001 19:19:48.498778   31154 main.go:141] libmachine: (ha-193737) DBG | Writing magic tar header
	I1001 19:19:48.498788   31154 main.go:141] libmachine: (ha-193737) DBG | Writing SSH key tar header
	I1001 19:19:48.498813   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:48.498738   31177 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737 ...
	I1001 19:19:48.498825   31154 main.go:141] libmachine: (ha-193737) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737
	I1001 19:19:48.498844   31154 main.go:141] libmachine: (ha-193737) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737 (perms=drwx------)
	I1001 19:19:48.498866   31154 main.go:141] libmachine: (ha-193737) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines (perms=drwxr-xr-x)
	I1001 19:19:48.498875   31154 main.go:141] libmachine: (ha-193737) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube (perms=drwxr-xr-x)
	I1001 19:19:48.498909   31154 main.go:141] libmachine: (ha-193737) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines
	I1001 19:19:48.498961   31154 main.go:141] libmachine: (ha-193737) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198 (perms=drwxrwxr-x)
	I1001 19:19:48.498975   31154 main.go:141] libmachine: (ha-193737) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:19:48.498992   31154 main.go:141] libmachine: (ha-193737) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198
	I1001 19:19:48.499012   31154 main.go:141] libmachine: (ha-193737) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 19:19:48.499035   31154 main.go:141] libmachine: (ha-193737) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 19:19:48.499048   31154 main.go:141] libmachine: (ha-193737) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 19:19:48.499056   31154 main.go:141] libmachine: (ha-193737) Creating domain...
	I1001 19:19:48.499066   31154 main.go:141] libmachine: (ha-193737) DBG | Checking permissions on dir: /home/jenkins
	I1001 19:19:48.499074   31154 main.go:141] libmachine: (ha-193737) DBG | Checking permissions on dir: /home
	I1001 19:19:48.499095   31154 main.go:141] libmachine: (ha-193737) DBG | Skipping /home - not owner
	I1001 19:19:48.500091   31154 main.go:141] libmachine: (ha-193737) define libvirt domain using xml: 
	I1001 19:19:48.500110   31154 main.go:141] libmachine: (ha-193737) <domain type='kvm'>
	I1001 19:19:48.500119   31154 main.go:141] libmachine: (ha-193737)   <name>ha-193737</name>
	I1001 19:19:48.500128   31154 main.go:141] libmachine: (ha-193737)   <memory unit='MiB'>2200</memory>
	I1001 19:19:48.500140   31154 main.go:141] libmachine: (ha-193737)   <vcpu>2</vcpu>
	I1001 19:19:48.500149   31154 main.go:141] libmachine: (ha-193737)   <features>
	I1001 19:19:48.500155   31154 main.go:141] libmachine: (ha-193737)     <acpi/>
	I1001 19:19:48.500161   31154 main.go:141] libmachine: (ha-193737)     <apic/>
	I1001 19:19:48.500166   31154 main.go:141] libmachine: (ha-193737)     <pae/>
	I1001 19:19:48.500178   31154 main.go:141] libmachine: (ha-193737)     
	I1001 19:19:48.500186   31154 main.go:141] libmachine: (ha-193737)   </features>
	I1001 19:19:48.500190   31154 main.go:141] libmachine: (ha-193737)   <cpu mode='host-passthrough'>
	I1001 19:19:48.500271   31154 main.go:141] libmachine: (ha-193737)   
	I1001 19:19:48.500322   31154 main.go:141] libmachine: (ha-193737)   </cpu>
	I1001 19:19:48.500344   31154 main.go:141] libmachine: (ha-193737)   <os>
	I1001 19:19:48.500376   31154 main.go:141] libmachine: (ha-193737)     <type>hvm</type>
	I1001 19:19:48.500385   31154 main.go:141] libmachine: (ha-193737)     <boot dev='cdrom'/>
	I1001 19:19:48.500394   31154 main.go:141] libmachine: (ha-193737)     <boot dev='hd'/>
	I1001 19:19:48.500402   31154 main.go:141] libmachine: (ha-193737)     <bootmenu enable='no'/>
	I1001 19:19:48.500407   31154 main.go:141] libmachine: (ha-193737)   </os>
	I1001 19:19:48.500422   31154 main.go:141] libmachine: (ha-193737)   <devices>
	I1001 19:19:48.500428   31154 main.go:141] libmachine: (ha-193737)     <disk type='file' device='cdrom'>
	I1001 19:19:48.500438   31154 main.go:141] libmachine: (ha-193737)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/boot2docker.iso'/>
	I1001 19:19:48.500448   31154 main.go:141] libmachine: (ha-193737)       <target dev='hdc' bus='scsi'/>
	I1001 19:19:48.500454   31154 main.go:141] libmachine: (ha-193737)       <readonly/>
	I1001 19:19:48.500461   31154 main.go:141] libmachine: (ha-193737)     </disk>
	I1001 19:19:48.500475   31154 main.go:141] libmachine: (ha-193737)     <disk type='file' device='disk'>
	I1001 19:19:48.500485   31154 main.go:141] libmachine: (ha-193737)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 19:19:48.500507   31154 main.go:141] libmachine: (ha-193737)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/ha-193737.rawdisk'/>
	I1001 19:19:48.500514   31154 main.go:141] libmachine: (ha-193737)       <target dev='hda' bus='virtio'/>
	I1001 19:19:48.500519   31154 main.go:141] libmachine: (ha-193737)     </disk>
	I1001 19:19:48.500525   31154 main.go:141] libmachine: (ha-193737)     <interface type='network'>
	I1001 19:19:48.500530   31154 main.go:141] libmachine: (ha-193737)       <source network='mk-ha-193737'/>
	I1001 19:19:48.500536   31154 main.go:141] libmachine: (ha-193737)       <model type='virtio'/>
	I1001 19:19:48.500541   31154 main.go:141] libmachine: (ha-193737)     </interface>
	I1001 19:19:48.500547   31154 main.go:141] libmachine: (ha-193737)     <interface type='network'>
	I1001 19:19:48.500552   31154 main.go:141] libmachine: (ha-193737)       <source network='default'/>
	I1001 19:19:48.500558   31154 main.go:141] libmachine: (ha-193737)       <model type='virtio'/>
	I1001 19:19:48.500570   31154 main.go:141] libmachine: (ha-193737)     </interface>
	I1001 19:19:48.500593   31154 main.go:141] libmachine: (ha-193737)     <serial type='pty'>
	I1001 19:19:48.500606   31154 main.go:141] libmachine: (ha-193737)       <target port='0'/>
	I1001 19:19:48.500616   31154 main.go:141] libmachine: (ha-193737)     </serial>
	I1001 19:19:48.500621   31154 main.go:141] libmachine: (ha-193737)     <console type='pty'>
	I1001 19:19:48.500632   31154 main.go:141] libmachine: (ha-193737)       <target type='serial' port='0'/>
	I1001 19:19:48.500644   31154 main.go:141] libmachine: (ha-193737)     </console>
	I1001 19:19:48.500651   31154 main.go:141] libmachine: (ha-193737)     <rng model='virtio'>
	I1001 19:19:48.500662   31154 main.go:141] libmachine: (ha-193737)       <backend model='random'>/dev/random</backend>
	I1001 19:19:48.500669   31154 main.go:141] libmachine: (ha-193737)     </rng>
	I1001 19:19:48.500674   31154 main.go:141] libmachine: (ha-193737)     
	I1001 19:19:48.500681   31154 main.go:141] libmachine: (ha-193737)     
	I1001 19:19:48.500687   31154 main.go:141] libmachine: (ha-193737)   </devices>
	I1001 19:19:48.500693   31154 main.go:141] libmachine: (ha-193737) </domain>
	I1001 19:19:48.500703   31154 main.go:141] libmachine: (ha-193737) 
	I1001 19:19:48.505062   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:e8:37:5d in network default
	I1001 19:19:48.505636   31154 main.go:141] libmachine: (ha-193737) Ensuring networks are active...
	I1001 19:19:48.505675   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:48.506541   31154 main.go:141] libmachine: (ha-193737) Ensuring network default is active
	I1001 19:19:48.506813   31154 main.go:141] libmachine: (ha-193737) Ensuring network mk-ha-193737 is active
	I1001 19:19:48.507255   31154 main.go:141] libmachine: (ha-193737) Getting domain xml...
	I1001 19:19:48.507904   31154 main.go:141] libmachine: (ha-193737) Creating domain...
	I1001 19:19:49.716659   31154 main.go:141] libmachine: (ha-193737) Waiting to get IP...
	I1001 19:19:49.717406   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:49.717831   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:49.717883   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:49.717825   31177 retry.go:31] will retry after 192.827447ms: waiting for machine to come up
	I1001 19:19:49.912407   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:49.912907   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:49.912957   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:49.912879   31177 retry.go:31] will retry after 258.269769ms: waiting for machine to come up
	I1001 19:19:50.172507   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:50.173033   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:50.173054   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:50.172948   31177 retry.go:31] will retry after 373.637188ms: waiting for machine to come up
	I1001 19:19:50.548615   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:50.549181   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:50.549210   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:50.549112   31177 retry.go:31] will retry after 430.626472ms: waiting for machine to come up
	I1001 19:19:50.981709   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:50.982164   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:50.982197   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:50.982117   31177 retry.go:31] will retry after 529.86174ms: waiting for machine to come up
	I1001 19:19:51.513872   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:51.514354   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:51.514379   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:51.514310   31177 retry.go:31] will retry after 925.92584ms: waiting for machine to come up
	I1001 19:19:52.441513   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:52.442015   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:52.442079   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:52.441913   31177 retry.go:31] will retry after 1.034076263s: waiting for machine to come up
	I1001 19:19:53.477995   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:53.478427   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:53.478449   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:53.478392   31177 retry.go:31] will retry after 1.13194403s: waiting for machine to come up
	I1001 19:19:54.612551   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:54.613118   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:54.613140   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:54.613054   31177 retry.go:31] will retry after 1.647034063s: waiting for machine to come up
	I1001 19:19:56.262733   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:56.263161   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:56.263186   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:56.263102   31177 retry.go:31] will retry after 1.500997099s: waiting for machine to come up
	I1001 19:19:57.765863   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:19:57.766323   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:19:57.766356   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:19:57.766274   31177 retry.go:31] will retry after 2.455749683s: waiting for machine to come up
	I1001 19:20:00.223334   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:00.223743   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:20:00.223759   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:20:00.223705   31177 retry.go:31] will retry after 2.437856543s: waiting for machine to come up
	I1001 19:20:02.664433   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:02.664809   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:20:02.664832   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:20:02.664763   31177 retry.go:31] will retry after 3.902681899s: waiting for machine to come up
	I1001 19:20:06.571440   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:06.571775   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find current IP address of domain ha-193737 in network mk-ha-193737
	I1001 19:20:06.571797   31154 main.go:141] libmachine: (ha-193737) DBG | I1001 19:20:06.571730   31177 retry.go:31] will retry after 5.423043301s: waiting for machine to come up
	I1001 19:20:11.999360   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:11.999779   31154 main.go:141] libmachine: (ha-193737) Found IP for machine: 192.168.39.14
	I1001 19:20:11.999815   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has current primary IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:11.999824   31154 main.go:141] libmachine: (ha-193737) Reserving static IP address...
	I1001 19:20:12.000199   31154 main.go:141] libmachine: (ha-193737) DBG | unable to find host DHCP lease matching {name: "ha-193737", mac: "52:54:00:80:2b:09", ip: "192.168.39.14"} in network mk-ha-193737
	I1001 19:20:12.077653   31154 main.go:141] libmachine: (ha-193737) Reserved static IP address: 192.168.39.14
	I1001 19:20:12.077732   31154 main.go:141] libmachine: (ha-193737) DBG | Getting to WaitForSSH function...
	I1001 19:20:12.077743   31154 main.go:141] libmachine: (ha-193737) Waiting for SSH to be available...
	I1001 19:20:12.080321   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.080865   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:minikube Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.080898   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.081006   31154 main.go:141] libmachine: (ha-193737) DBG | Using SSH client type: external
	I1001 19:20:12.081047   31154 main.go:141] libmachine: (ha-193737) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa (-rw-------)
	I1001 19:20:12.081075   31154 main.go:141] libmachine: (ha-193737) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.14 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 19:20:12.081085   31154 main.go:141] libmachine: (ha-193737) DBG | About to run SSH command:
	I1001 19:20:12.081096   31154 main.go:141] libmachine: (ha-193737) DBG | exit 0
	I1001 19:20:12.208487   31154 main.go:141] libmachine: (ha-193737) DBG | SSH cmd err, output: <nil>: 
	I1001 19:20:12.208725   31154 main.go:141] libmachine: (ha-193737) KVM machine creation complete!
	I1001 19:20:12.209102   31154 main.go:141] libmachine: (ha-193737) Calling .GetConfigRaw
	I1001 19:20:12.209646   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:12.209809   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:12.209935   31154 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 19:20:12.209949   31154 main.go:141] libmachine: (ha-193737) Calling .GetState
	I1001 19:20:12.211166   31154 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 19:20:12.211190   31154 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 19:20:12.211195   31154 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 19:20:12.211201   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:12.213529   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.213857   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.213883   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.213972   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:12.214116   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.214264   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.214394   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:12.214556   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:12.214781   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:20:12.214795   31154 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 19:20:12.319892   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 19:20:12.319913   31154 main.go:141] libmachine: Detecting the provisioner...
	I1001 19:20:12.319921   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:12.322718   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.323165   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.323192   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.323331   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:12.323522   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.323695   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.323840   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:12.324072   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:12.324284   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:20:12.324296   31154 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 19:20:12.429264   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 19:20:12.429335   31154 main.go:141] libmachine: found compatible host: buildroot
	I1001 19:20:12.429344   31154 main.go:141] libmachine: Provisioning with buildroot...
	I1001 19:20:12.429358   31154 main.go:141] libmachine: (ha-193737) Calling .GetMachineName
	I1001 19:20:12.429572   31154 buildroot.go:166] provisioning hostname "ha-193737"
	I1001 19:20:12.429594   31154 main.go:141] libmachine: (ha-193737) Calling .GetMachineName
	I1001 19:20:12.429736   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:12.432551   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.432897   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.432926   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.433127   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:12.433317   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.433512   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.433661   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:12.433801   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:12.433993   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:20:12.434007   31154 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-193737 && echo "ha-193737" | sudo tee /etc/hostname
	I1001 19:20:12.557230   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-193737
	
	I1001 19:20:12.557264   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:12.560034   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.560377   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.560404   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.560580   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:12.560736   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.560897   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.561023   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:12.561173   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:12.561344   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:20:12.561360   31154 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-193737' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-193737/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-193737' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 19:20:12.673716   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 19:20:12.673759   31154 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 19:20:12.673797   31154 buildroot.go:174] setting up certificates
	I1001 19:20:12.673811   31154 provision.go:84] configureAuth start
	I1001 19:20:12.673825   31154 main.go:141] libmachine: (ha-193737) Calling .GetMachineName
	I1001 19:20:12.674136   31154 main.go:141] libmachine: (ha-193737) Calling .GetIP
	I1001 19:20:12.676892   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.677280   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.677321   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.677483   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:12.679978   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.680305   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.680326   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.680487   31154 provision.go:143] copyHostCerts
	I1001 19:20:12.680516   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:20:12.680561   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 19:20:12.680573   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:20:12.680654   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 19:20:12.680751   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:20:12.680775   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 19:20:12.680787   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:20:12.680824   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 19:20:12.680885   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:20:12.680909   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 19:20:12.680917   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:20:12.680951   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 19:20:12.681013   31154 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.ha-193737 san=[127.0.0.1 192.168.39.14 ha-193737 localhost minikube]
	I1001 19:20:12.842484   31154 provision.go:177] copyRemoteCerts
	I1001 19:20:12.842574   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 19:20:12.842621   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:12.845898   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.846287   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:12.846310   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:12.846561   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:12.846731   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:12.846941   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:12.847077   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:20:12.930698   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 19:20:12.930795   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 19:20:12.955852   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 19:20:12.955930   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1001 19:20:12.979656   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 19:20:12.979722   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 19:20:13.003473   31154 provision.go:87] duration metric: took 329.649424ms to configureAuth
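(Illustrative spot-check only, not part of the captured output: the scp lines above push the CA and the server key pair to /etc/docker on the guest, so over the same SSH session one could verify them roughly like this.)

	# confirm the pushed server certificate chains to the pushed CA
	sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
	# confirm the SANs match what configureAuth requested (127.0.0.1 192.168.39.14 ha-193737 localhost minikube)
	sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'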
	I1001 19:20:13.003500   31154 buildroot.go:189] setting minikube options for container-runtime
	I1001 19:20:13.003695   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:20:13.003768   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:13.006651   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.006965   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.006994   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.007204   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:13.007396   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:13.007569   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:13.007765   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:13.007963   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:13.008170   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:20:13.008194   31154 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 19:20:13.223895   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 19:20:13.223928   31154 main.go:141] libmachine: Checking connection to Docker...
	I1001 19:20:13.223938   31154 main.go:141] libmachine: (ha-193737) Calling .GetURL
	I1001 19:20:13.225295   31154 main.go:141] libmachine: (ha-193737) DBG | Using libvirt version 6000000
	I1001 19:20:13.227525   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.227866   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.227899   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.227999   31154 main.go:141] libmachine: Docker is up and running!
	I1001 19:20:13.228014   31154 main.go:141] libmachine: Reticulating splines...
	I1001 19:20:13.228022   31154 client.go:171] duration metric: took 25.333507515s to LocalClient.Create
	I1001 19:20:13.228041   31154 start.go:167] duration metric: took 25.333560566s to libmachine.API.Create "ha-193737"
	I1001 19:20:13.228050   31154 start.go:293] postStartSetup for "ha-193737" (driver="kvm2")
	I1001 19:20:13.228060   31154 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 19:20:13.228083   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:13.228317   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 19:20:13.228339   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:13.230391   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.230709   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.230732   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.230837   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:13.230988   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:13.231120   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:13.231290   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:20:13.314353   31154 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 19:20:13.318432   31154 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 19:20:13.318458   31154 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 19:20:13.318541   31154 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 19:20:13.318638   31154 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 19:20:13.318652   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /etc/ssl/certs/184302.pem
	I1001 19:20:13.318780   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 19:20:13.328571   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:20:13.353035   31154 start.go:296] duration metric: took 124.970717ms for postStartSetup
	I1001 19:20:13.353110   31154 main.go:141] libmachine: (ha-193737) Calling .GetConfigRaw
	I1001 19:20:13.353736   31154 main.go:141] libmachine: (ha-193737) Calling .GetIP
	I1001 19:20:13.356423   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.356817   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.356852   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.357086   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:20:13.357278   31154 start.go:128] duration metric: took 25.480687424s to createHost
	I1001 19:20:13.357297   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:13.359783   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.360160   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.360189   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.360384   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:13.360591   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:13.360774   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:13.360932   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:13.361105   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:13.361274   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:20:13.361289   31154 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 19:20:13.464991   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727810413.446268696
	
	I1001 19:20:13.465023   31154 fix.go:216] guest clock: 1727810413.446268696
	I1001 19:20:13.465037   31154 fix.go:229] Guest: 2024-10-01 19:20:13.446268696 +0000 UTC Remote: 2024-10-01 19:20:13.35728811 +0000 UTC m=+25.585126920 (delta=88.980586ms)
	I1001 19:20:13.465070   31154 fix.go:200] guest clock delta is within tolerance: 88.980586ms
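(For reference, the reported delta is just the guest clock reading minus the host-side timestamp logged above:)

	1727810413.446268696 s − 1727810413.357288110 s = 0.088980586 s ≈ 88.98 ms, comfortably inside the tolerance check.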
	I1001 19:20:13.465076   31154 start.go:83] releasing machines lock for "ha-193737", held for 25.588575039s
	I1001 19:20:13.465101   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:13.465340   31154 main.go:141] libmachine: (ha-193737) Calling .GetIP
	I1001 19:20:13.468083   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.468419   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.468447   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.468613   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:13.469143   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:13.469301   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:13.469362   31154 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 19:20:13.469413   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:13.469528   31154 ssh_runner.go:195] Run: cat /version.json
	I1001 19:20:13.469548   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:13.471980   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.472049   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.472309   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.472339   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.472393   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:13.472414   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:13.472482   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:13.472622   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:13.472666   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:13.472784   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:13.472833   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:13.472925   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:13.472991   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:20:13.473062   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:20:13.597462   31154 ssh_runner.go:195] Run: systemctl --version
	I1001 19:20:13.603452   31154 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 19:20:13.764276   31154 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 19:20:13.770676   31154 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 19:20:13.770753   31154 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 19:20:13.785990   31154 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 19:20:13.786018   31154 start.go:495] detecting cgroup driver to use...
	I1001 19:20:13.786088   31154 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 19:20:13.802042   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 19:20:13.815442   31154 docker.go:217] disabling cri-docker service (if available) ...
	I1001 19:20:13.815514   31154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 19:20:13.829012   31154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 19:20:13.842769   31154 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 19:20:13.956694   31154 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 19:20:14.102874   31154 docker.go:233] disabling docker service ...
	I1001 19:20:14.102940   31154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 19:20:14.117261   31154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 19:20:14.129985   31154 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 19:20:14.273597   31154 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 19:20:14.384529   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 19:20:14.397753   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 19:20:14.415792   31154 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 19:20:14.415850   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:14.426007   31154 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 19:20:14.426087   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:14.436393   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:14.446247   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:14.456029   31154 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 19:20:14.466078   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:14.475781   31154 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:14.492551   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:14.502706   31154 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 19:20:14.512290   31154 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 19:20:14.512379   31154 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 19:20:14.525913   31154 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
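(The three commands above are the usual bridge-netfilter preparation; as a standalone sketch using the same paths:)

	# if the bridge sysctl is missing, br_netfilter is not loaded yet, so load it
	sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
	# kube-proxy and the CNI also need IPv4 forwarding enabled
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"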
	I1001 19:20:14.535543   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:20:14.653960   31154 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 19:20:14.741173   31154 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 19:20:14.741263   31154 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 19:20:14.745800   31154 start.go:563] Will wait 60s for crictl version
	I1001 19:20:14.745869   31154 ssh_runner.go:195] Run: which crictl
	I1001 19:20:14.749449   31154 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 19:20:14.789074   31154 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 19:20:14.789159   31154 ssh_runner.go:195] Run: crio --version
	I1001 19:20:14.820545   31154 ssh_runner.go:195] Run: crio --version
	I1001 19:20:14.849920   31154 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 19:20:14.850894   31154 main.go:141] libmachine: (ha-193737) Calling .GetIP
	I1001 19:20:14.853389   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:14.853698   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:14.853724   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:14.853935   31154 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 19:20:14.857967   31154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 19:20:14.870673   31154 kubeadm.go:883] updating cluster {Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 19:20:14.870794   31154 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 19:20:14.870846   31154 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 19:20:14.901722   31154 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1001 19:20:14.901791   31154 ssh_runner.go:195] Run: which lz4
	I1001 19:20:14.905716   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1001 19:20:14.905869   31154 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 19:20:14.909954   31154 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 19:20:14.909985   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1001 19:20:16.176019   31154 crio.go:462] duration metric: took 1.270229445s to copy over tarball
	I1001 19:20:16.176091   31154 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 19:20:18.196924   31154 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.020807915s)
	I1001 19:20:18.196955   31154 crio.go:469] duration metric: took 2.020904101s to extract the tarball
	I1001 19:20:18.196963   31154 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1001 19:20:18.232395   31154 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 19:20:18.277292   31154 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 19:20:18.277310   31154 cache_images.go:84] Images are preloaded, skipping loading
	I1001 19:20:18.277317   31154 kubeadm.go:934] updating node { 192.168.39.14 8443 v1.31.1 crio true true} ...
	I1001 19:20:18.277404   31154 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-193737 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 19:20:18.277469   31154 ssh_runner.go:195] Run: crio config
	I1001 19:20:18.320909   31154 cni.go:84] Creating CNI manager for ""
	I1001 19:20:18.320940   31154 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1001 19:20:18.320955   31154 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 19:20:18.320983   31154 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.14 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-193737 NodeName:ha-193737 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.14"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.14 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 19:20:18.321130   31154 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.14
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-193737"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.14
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.14"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
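(Illustrative only: a generated config like the one above can be sanity-checked without touching node state via kubeadm's dry-run mode, using the path the later log lines show it is written to; on the guest the binary lives under /var/lib/minikube/binaries/v1.31.1/.)

	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run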
	
	I1001 19:20:18.321154   31154 kube-vip.go:115] generating kube-vip config ...
	I1001 19:20:18.321192   31154 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1001 19:20:18.337979   31154 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1001 19:20:18.338099   31154 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
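(Illustrative check, with the address and interface taken from the manifest above: once kube-vip holds the plndr-cp-lock lease, the control-plane VIP should appear on the leader's eth0.)

	ip addr show dev eth0 | grep 192.168.39.254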
	I1001 19:20:18.338161   31154 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 19:20:18.347788   31154 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 19:20:18.347864   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1001 19:20:18.356907   31154 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1001 19:20:18.372922   31154 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 19:20:18.388904   31154 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I1001 19:20:18.404938   31154 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1001 19:20:18.421257   31154 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1001 19:20:18.425122   31154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 19:20:18.436829   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:20:18.545073   31154 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 19:20:18.560862   31154 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737 for IP: 192.168.39.14
	I1001 19:20:18.560887   31154 certs.go:194] generating shared ca certs ...
	I1001 19:20:18.560910   31154 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:18.561104   31154 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 19:20:18.561167   31154 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 19:20:18.561182   31154 certs.go:256] generating profile certs ...
	I1001 19:20:18.561249   31154 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key
	I1001 19:20:18.561277   31154 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.crt with IP's: []
	I1001 19:20:19.147252   31154 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.crt ...
	I1001 19:20:19.147288   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.crt: {Name:mk6cc12194e2b1b488446b45fb57531c12b19cd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:19.147481   31154 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key ...
	I1001 19:20:19.147500   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key: {Name:mk1f7ee6c9ea6b8fcc952a031324588416a57469 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:19.147599   31154 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.c3487d2e
	I1001 19:20:19.147622   31154 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.c3487d2e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.14 192.168.39.254]
	I1001 19:20:19.274032   31154 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.c3487d2e ...
	I1001 19:20:19.274061   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.c3487d2e: {Name:mk19f3cf4cd1f2fca54e40738408be6aa73421ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:19.274224   31154 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.c3487d2e ...
	I1001 19:20:19.274242   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.c3487d2e: {Name:mk2ba24a36a70c8a6e47019bdcda573a26500b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:19.274335   31154 certs.go:381] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.c3487d2e -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt
	I1001 19:20:19.274441   31154 certs.go:385] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.c3487d2e -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key
	I1001 19:20:19.274522   31154 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key
	I1001 19:20:19.274541   31154 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt with IP's: []
	I1001 19:20:19.432987   31154 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt ...
	I1001 19:20:19.433018   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt: {Name:mkaa29f743f43e700e39d0141b3a793971db9bab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:19.433198   31154 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key ...
	I1001 19:20:19.433215   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key: {Name:mkda8f4e7f39ac52933dd1a3f0855317051465de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:19.433333   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 19:20:19.433358   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 19:20:19.433374   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 19:20:19.433394   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 19:20:19.433411   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 19:20:19.433428   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 19:20:19.433441   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 19:20:19.433457   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 19:20:19.433541   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem (1338 bytes)
	W1001 19:20:19.433593   31154 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430_empty.pem, impossibly tiny 0 bytes
	I1001 19:20:19.433606   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 19:20:19.433643   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 19:20:19.433673   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 19:20:19.433703   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 19:20:19.433758   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:20:19.433792   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem -> /usr/share/ca-certificates/18430.pem
	I1001 19:20:19.433812   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /usr/share/ca-certificates/184302.pem
	I1001 19:20:19.433830   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:20:19.434414   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 19:20:19.462971   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 19:20:19.486817   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 19:20:19.510214   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 19:20:19.536715   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1001 19:20:19.562219   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 19:20:19.587563   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 19:20:19.611975   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 19:20:19.635789   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem --> /usr/share/ca-certificates/18430.pem (1338 bytes)
	I1001 19:20:19.660541   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /usr/share/ca-certificates/184302.pem (1708 bytes)
	I1001 19:20:19.686922   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 19:20:19.713247   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 19:20:19.737109   31154 ssh_runner.go:195] Run: openssl version
	I1001 19:20:19.743466   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18430.pem && ln -fs /usr/share/ca-certificates/18430.pem /etc/ssl/certs/18430.pem"
	I1001 19:20:19.755116   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18430.pem
	I1001 19:20:19.760240   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 19:20:19.760326   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18430.pem
	I1001 19:20:19.767474   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18430.pem /etc/ssl/certs/51391683.0"
	I1001 19:20:19.779182   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/184302.pem && ln -fs /usr/share/ca-certificates/184302.pem /etc/ssl/certs/184302.pem"
	I1001 19:20:19.790431   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/184302.pem
	I1001 19:20:19.795533   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 19:20:19.795593   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/184302.pem
	I1001 19:20:19.801533   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/184302.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 19:20:19.812537   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 19:20:19.823577   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:20:19.828798   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:20:19.828870   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:20:19.835152   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
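(The test -L / ln -fs commands above install the three CAs under their OpenSSL subject-hash names, e.g. b5213941.0 for minikubeCA.pem; an illustrative way to confirm the mapping on the guest, using the same paths:)

	for pem in 18430.pem 184302.pem minikubeCA.pem; do
	  hash=$(openssl x509 -hash -noout -in "/usr/share/ca-certificates/$pem")
	  ls -l "/etc/ssl/certs/$hash.0"
	done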
	I1001 19:20:19.846376   31154 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 19:20:19.850628   31154 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 19:20:19.850680   31154 kubeadm.go:392] StartCluster: {Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clus
terName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:20:19.850761   31154 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 19:20:19.850812   31154 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 19:20:19.892830   31154 cri.go:89] found id: ""
	I1001 19:20:19.892895   31154 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 19:20:19.902960   31154 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 19:20:19.913367   31154 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 19:20:19.923292   31154 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 19:20:19.923330   31154 kubeadm.go:157] found existing configuration files:
	
	I1001 19:20:19.923388   31154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 19:20:19.932878   31154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 19:20:19.932945   31154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 19:20:19.943333   31154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 19:20:19.952676   31154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 19:20:19.952738   31154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 19:20:19.962992   31154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 19:20:19.972649   31154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 19:20:19.972735   31154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 19:20:19.982834   31154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 19:20:19.993409   31154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 19:20:19.993469   31154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 19:20:20.002988   31154 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 19:20:20.127435   31154 kubeadm.go:310] W1001 19:20:20.114172     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 19:20:20.128326   31154 kubeadm.go:310] W1001 19:20:20.115365     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 19:20:20.262781   31154 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 19:20:31.543814   31154 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 19:20:31.543907   31154 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 19:20:31.543995   31154 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 19:20:31.544073   31154 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 19:20:31.544148   31154 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 19:20:31.544203   31154 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 19:20:31.545532   31154 out.go:235]   - Generating certificates and keys ...
	I1001 19:20:31.545611   31154 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 19:20:31.545691   31154 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 19:20:31.545778   31154 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1001 19:20:31.545854   31154 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1001 19:20:31.545932   31154 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1001 19:20:31.546012   31154 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1001 19:20:31.546085   31154 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1001 19:20:31.546175   31154 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-193737 localhost] and IPs [192.168.39.14 127.0.0.1 ::1]
	I1001 19:20:31.546218   31154 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1001 19:20:31.546369   31154 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-193737 localhost] and IPs [192.168.39.14 127.0.0.1 ::1]
	I1001 19:20:31.546436   31154 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1001 19:20:31.546488   31154 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1001 19:20:31.546527   31154 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1001 19:20:31.546577   31154 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 19:20:31.546623   31154 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 19:20:31.546668   31154 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 19:20:31.546722   31154 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 19:20:31.546817   31154 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 19:20:31.546863   31154 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 19:20:31.546932   31154 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 19:20:31.547004   31154 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 19:20:31.549095   31154 out.go:235]   - Booting up control plane ...
	I1001 19:20:31.549193   31154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 19:20:31.549275   31154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 19:20:31.549365   31154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 19:20:31.549456   31154 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 19:20:31.549553   31154 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 19:20:31.549596   31154 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 19:20:31.549707   31154 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 19:20:31.549790   31154 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 19:20:31.549840   31154 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.357694ms
	I1001 19:20:31.549900   31154 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1001 19:20:31.549947   31154 kubeadm.go:310] [api-check] The API server is healthy after 6.04683454s
	I1001 19:20:31.550033   31154 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 19:20:31.550189   31154 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 19:20:31.550277   31154 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 19:20:31.550430   31154 kubeadm.go:310] [mark-control-plane] Marking the node ha-193737 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 19:20:31.550487   31154 kubeadm.go:310] [bootstrap-token] Using token: 7by4e8.7cs25dkxb8txjdft
	I1001 19:20:31.551753   31154 out.go:235]   - Configuring RBAC rules ...
	I1001 19:20:31.551859   31154 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 19:20:31.551994   31154 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 19:20:31.552131   31154 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 19:20:31.552254   31154 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 19:20:31.552369   31154 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 19:20:31.552467   31154 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 19:20:31.552576   31154 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 19:20:31.552620   31154 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 19:20:31.552661   31154 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 19:20:31.552670   31154 kubeadm.go:310] 
	I1001 19:20:31.552724   31154 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 19:20:31.552736   31154 kubeadm.go:310] 
	I1001 19:20:31.552812   31154 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 19:20:31.552820   31154 kubeadm.go:310] 
	I1001 19:20:31.552841   31154 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 19:20:31.552936   31154 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 19:20:31.553000   31154 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 19:20:31.553018   31154 kubeadm.go:310] 
	I1001 19:20:31.553076   31154 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 19:20:31.553082   31154 kubeadm.go:310] 
	I1001 19:20:31.553119   31154 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 19:20:31.553125   31154 kubeadm.go:310] 
	I1001 19:20:31.553165   31154 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 19:20:31.553231   31154 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 19:20:31.553309   31154 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 19:20:31.553319   31154 kubeadm.go:310] 
	I1001 19:20:31.553382   31154 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 19:20:31.553446   31154 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 19:20:31.553452   31154 kubeadm.go:310] 
	I1001 19:20:31.553515   31154 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7by4e8.7cs25dkxb8txjdft \
	I1001 19:20:31.553595   31154 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 \
	I1001 19:20:31.553612   31154 kubeadm.go:310] 	--control-plane 
	I1001 19:20:31.553616   31154 kubeadm.go:310] 
	I1001 19:20:31.553679   31154 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 19:20:31.553686   31154 kubeadm.go:310] 
	I1001 19:20:31.553757   31154 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7by4e8.7cs25dkxb8txjdft \
	I1001 19:20:31.553878   31154 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 
	I1001 19:20:31.553899   31154 cni.go:84] Creating CNI manager for ""
	I1001 19:20:31.553906   31154 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1001 19:20:31.555354   31154 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1001 19:20:31.556734   31154 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1001 19:20:31.562528   31154 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1001 19:20:31.562546   31154 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1001 19:20:31.584306   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1001 19:20:31.963746   31154 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 19:20:31.963826   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:31.963839   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-193737 minikube.k8s.io/updated_at=2024_10_01T19_20_31_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=ha-193737 minikube.k8s.io/primary=true
	I1001 19:20:32.001753   31154 ops.go:34] apiserver oom_adj: -16
	I1001 19:20:32.132202   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:32.632805   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:33.133195   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:33.633216   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:34.132915   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:34.632316   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:35.132491   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:35.632537   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:36.132620   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:20:36.218756   31154 kubeadm.go:1113] duration metric: took 4.255002576s to wait for elevateKubeSystemPrivileges
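
The repeated `kubectl get sa default` runs between 19:20:32 and 19:20:36 are a readiness poll: the `minikube-rbac` cluster-admin binding for kube-system:default only takes effect once the default ServiceAccount exists. Below is a minimal Go sketch of that poll-then-bind pattern, not minikube's implementation; it assumes `kubectl` is on PATH, reuses the kubeconfig path from the log, and the 500ms interval and 2-minute budget are illustrative.

    // Poll for the "default" ServiceAccount, then grant cluster-admin to
    // kube-system:default -- a sketch of the loop recorded in the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubeconfig := "/var/lib/minikube/kubeconfig" // path taken from the log
        deadline := time.Now().Add(2 * time.Minute)  // assumed budget
        for time.Now().Before(deadline) {
            if exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default").Run() == nil {
                break // the default ServiceAccount exists
            }
            time.Sleep(500 * time.Millisecond) // assumed interval
        }
        out, err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
            "create", "clusterrolebinding", "minikube-rbac",
            "--clusterrole=cluster-admin",
            "--serviceaccount=kube-system:default").CombinedOutput()
        fmt.Println(string(out), err)
    }
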
	I1001 19:20:36.218788   31154 kubeadm.go:394] duration metric: took 16.368111595s to StartCluster
	I1001 19:20:36.218804   31154 settings.go:142] acquiring lock: {Name:mkeb3fe64ed992373f048aef24eb0d675bcab60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:36.218873   31154 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 19:20:36.219494   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/kubeconfig: {Name:mk7ef19d546928001abf2478a3abe1d17765a591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:20:36.219713   31154 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:20:36.219727   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1001 19:20:36.219734   31154 start.go:241] waiting for startup goroutines ...
	I1001 19:20:36.219741   31154 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 19:20:36.219834   31154 addons.go:69] Setting storage-provisioner=true in profile "ha-193737"
	I1001 19:20:36.219856   31154 addons.go:234] Setting addon storage-provisioner=true in "ha-193737"
	I1001 19:20:36.219869   31154 addons.go:69] Setting default-storageclass=true in profile "ha-193737"
	I1001 19:20:36.219886   31154 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-193737"
	I1001 19:20:36.219893   31154 host.go:66] Checking if "ha-193737" exists ...
	I1001 19:20:36.219970   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:20:36.220394   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:20:36.220428   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:20:36.220398   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:20:36.220520   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:20:36.237915   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41109
	I1001 19:20:36.238065   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40357
	I1001 19:20:36.238375   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:20:36.238551   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:20:36.238872   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:20:36.238891   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:20:36.239076   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:20:36.239108   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:20:36.239214   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:20:36.239454   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:20:36.239611   31154 main.go:141] libmachine: (ha-193737) Calling .GetState
	I1001 19:20:36.239781   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:20:36.239809   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:20:36.241737   31154 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 19:20:36.241972   31154 kapi.go:59] client config for ha-193737: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.crt", KeyFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key", CAFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 19:20:36.242414   31154 cert_rotation.go:140] Starting client certificate rotation controller
	I1001 19:20:36.242541   31154 addons.go:234] Setting addon default-storageclass=true in "ha-193737"
	I1001 19:20:36.242580   31154 host.go:66] Checking if "ha-193737" exists ...
	I1001 19:20:36.242883   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:20:36.242931   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:20:36.258780   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39715
	I1001 19:20:36.259292   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:20:36.259824   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:20:36.259850   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:20:36.260262   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:20:36.260587   31154 main.go:141] libmachine: (ha-193737) Calling .GetState
	I1001 19:20:36.262369   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37495
	I1001 19:20:36.262435   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:36.263083   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:20:36.263600   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:20:36.263628   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:20:36.264019   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:20:36.264582   31154 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 19:20:36.264749   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:20:36.264788   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:20:36.265963   31154 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 19:20:36.265987   31154 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 19:20:36.266008   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:36.270544   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:36.271199   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:36.271222   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:36.271425   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:36.271642   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:36.271818   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:36.272058   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:20:36.283812   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42859
	I1001 19:20:36.284387   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:20:36.284896   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:20:36.284913   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:20:36.285508   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:20:36.285834   31154 main.go:141] libmachine: (ha-193737) Calling .GetState
	I1001 19:20:36.288106   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:20:36.288393   31154 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 19:20:36.288414   31154 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 19:20:36.288437   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:20:36.291938   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:36.292436   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:20:36.292463   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:20:36.292681   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:20:36.292858   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:20:36.293020   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:20:36.293164   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:20:36.379914   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1001 19:20:36.401549   31154 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 19:20:36.450371   31154 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 19:20:36.756603   31154 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
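
The long sed pipeline at 19:20:36.379 is what performs that injection: it adds a `log` directive above the existing `errors` line and a `hosts` block above `forward . /etc/resolv.conf`, then replaces the coredns ConfigMap. After the replace, the edited region of the Corefile should read roughly as follows (only the affected lines are shown; the rest of the stock Corefile is unchanged and the indentation here is illustrative):

        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
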
	I1001 19:20:37.190467   31154 main.go:141] libmachine: Making call to close driver server
	I1001 19:20:37.190501   31154 main.go:141] libmachine: (ha-193737) Calling .Close
	I1001 19:20:37.190537   31154 main.go:141] libmachine: Making call to close driver server
	I1001 19:20:37.190556   31154 main.go:141] libmachine: (ha-193737) Calling .Close
	I1001 19:20:37.190812   31154 main.go:141] libmachine: Successfully made call to close driver server
	I1001 19:20:37.190821   31154 main.go:141] libmachine: Successfully made call to close driver server
	I1001 19:20:37.190830   31154 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 19:20:37.190833   31154 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 19:20:37.190839   31154 main.go:141] libmachine: Making call to close driver server
	I1001 19:20:37.190841   31154 main.go:141] libmachine: Making call to close driver server
	I1001 19:20:37.190847   31154 main.go:141] libmachine: (ha-193737) Calling .Close
	I1001 19:20:37.190848   31154 main.go:141] libmachine: (ha-193737) Calling .Close
	I1001 19:20:37.191111   31154 main.go:141] libmachine: Successfully made call to close driver server
	I1001 19:20:37.191115   31154 main.go:141] libmachine: Successfully made call to close driver server
	I1001 19:20:37.191125   31154 main.go:141] libmachine: (ha-193737) DBG | Closing plugin on server side
	I1001 19:20:37.191134   31154 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 19:20:37.191134   31154 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 19:20:37.191205   31154 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1001 19:20:37.191222   31154 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1001 19:20:37.191338   31154 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1001 19:20:37.191344   31154 round_trippers.go:469] Request Headers:
	I1001 19:20:37.191354   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:20:37.191358   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:20:37.219411   31154 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I1001 19:20:37.219983   31154 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1001 19:20:37.219997   31154 round_trippers.go:469] Request Headers:
	I1001 19:20:37.220005   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:20:37.220008   31154 round_trippers.go:473]     Content-Type: application/json
	I1001 19:20:37.220011   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:20:37.228402   31154 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
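
The GET on /apis/storage.k8s.io/v1/storageclasses followed by the PUT to .../storageclasses/standard is the default-storageclass addon reconciling the `standard` StorageClass. Below is a small client-go sketch of that kind of check-and-update against the kubeconfig shown earlier in the log; it assumes the conventional `storageclass.kubernetes.io/is-default-class` annotation is the field being set, since the request body is not visible in this log.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path as reported by settings.go above.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19736-11198/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        sc, err := cs.StorageV1().StorageClasses().Get(context.Background(), "standard", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if sc.Annotations == nil {
            sc.Annotations = map[string]string{}
        }
        sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
        if _, err := cs.StorageV1().StorageClasses().Update(context.Background(), sc, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("standard marked as default StorageClass")
    }
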
	I1001 19:20:37.228596   31154 main.go:141] libmachine: Making call to close driver server
	I1001 19:20:37.228610   31154 main.go:141] libmachine: (ha-193737) Calling .Close
	I1001 19:20:37.228929   31154 main.go:141] libmachine: Successfully made call to close driver server
	I1001 19:20:37.228950   31154 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 19:20:37.228974   31154 main.go:141] libmachine: (ha-193737) DBG | Closing plugin on server side
	I1001 19:20:37.230600   31154 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1001 19:20:37.231770   31154 addons.go:510] duration metric: took 1.012023889s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1001 19:20:37.231812   31154 start.go:246] waiting for cluster config update ...
	I1001 19:20:37.231823   31154 start.go:255] writing updated cluster config ...
	I1001 19:20:37.233187   31154 out.go:201] 
	I1001 19:20:37.234563   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:20:37.234629   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:20:37.236253   31154 out.go:177] * Starting "ha-193737-m02" control-plane node in "ha-193737" cluster
	I1001 19:20:37.237974   31154 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 19:20:37.238007   31154 cache.go:56] Caching tarball of preloaded images
	I1001 19:20:37.238089   31154 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 19:20:37.238106   31154 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 19:20:37.238204   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:20:37.238426   31154 start.go:360] acquireMachinesLock for ha-193737-m02: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 19:20:37.238490   31154 start.go:364] duration metric: took 37.598µs to acquireMachinesLock for "ha-193737-m02"
	I1001 19:20:37.238511   31154 start.go:93] Provisioning new machine with config: &{Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:20:37.238603   31154 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1001 19:20:37.240050   31154 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 19:20:37.240148   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:20:37.240181   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:20:37.256492   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43983
	I1001 19:20:37.257003   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:20:37.257628   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:20:37.257663   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:20:37.258069   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:20:37.258273   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetMachineName
	I1001 19:20:37.258413   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:37.258584   31154 start.go:159] libmachine.API.Create for "ha-193737" (driver="kvm2")
	I1001 19:20:37.258609   31154 client.go:168] LocalClient.Create starting
	I1001 19:20:37.258644   31154 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem
	I1001 19:20:37.258691   31154 main.go:141] libmachine: Decoding PEM data...
	I1001 19:20:37.258706   31154 main.go:141] libmachine: Parsing certificate...
	I1001 19:20:37.258752   31154 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem
	I1001 19:20:37.258775   31154 main.go:141] libmachine: Decoding PEM data...
	I1001 19:20:37.258791   31154 main.go:141] libmachine: Parsing certificate...
	I1001 19:20:37.258820   31154 main.go:141] libmachine: Running pre-create checks...
	I1001 19:20:37.258831   31154 main.go:141] libmachine: (ha-193737-m02) Calling .PreCreateCheck
	I1001 19:20:37.258981   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetConfigRaw
	I1001 19:20:37.259499   31154 main.go:141] libmachine: Creating machine...
	I1001 19:20:37.259521   31154 main.go:141] libmachine: (ha-193737-m02) Calling .Create
	I1001 19:20:37.259645   31154 main.go:141] libmachine: (ha-193737-m02) Creating KVM machine...
	I1001 19:20:37.261171   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found existing default KVM network
	I1001 19:20:37.261376   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found existing private KVM network mk-ha-193737
	I1001 19:20:37.261582   31154 main.go:141] libmachine: (ha-193737-m02) Setting up store path in /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02 ...
	I1001 19:20:37.261615   31154 main.go:141] libmachine: (ha-193737-m02) Building disk image from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 19:20:37.261632   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:37.261518   31541 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:20:37.261750   31154 main.go:141] libmachine: (ha-193737-m02) Downloading /home/jenkins/minikube-integration/19736-11198/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 19:20:37.511803   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:37.511639   31541 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa...
	I1001 19:20:37.705703   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:37.705550   31541 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/ha-193737-m02.rawdisk...
	I1001 19:20:37.705738   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Writing magic tar header
	I1001 19:20:37.705753   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Writing SSH key tar header
	I1001 19:20:37.705765   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:37.705670   31541 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02 ...
	I1001 19:20:37.705777   31154 main.go:141] libmachine: (ha-193737-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02 (perms=drwx------)
	I1001 19:20:37.705791   31154 main.go:141] libmachine: (ha-193737-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines (perms=drwxr-xr-x)
	I1001 19:20:37.705802   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02
	I1001 19:20:37.705808   31154 main.go:141] libmachine: (ha-193737-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube (perms=drwxr-xr-x)
	I1001 19:20:37.705819   31154 main.go:141] libmachine: (ha-193737-m02) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198 (perms=drwxrwxr-x)
	I1001 19:20:37.705827   31154 main.go:141] libmachine: (ha-193737-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 19:20:37.705840   31154 main.go:141] libmachine: (ha-193737-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 19:20:37.705857   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines
	I1001 19:20:37.705865   31154 main.go:141] libmachine: (ha-193737-m02) Creating domain...
	I1001 19:20:37.705882   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:20:37.705895   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198
	I1001 19:20:37.705908   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 19:20:37.705917   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Checking permissions on dir: /home/jenkins
	I1001 19:20:37.705926   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Checking permissions on dir: /home
	I1001 19:20:37.705934   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Skipping /home - not owner
	I1001 19:20:37.706847   31154 main.go:141] libmachine: (ha-193737-m02) define libvirt domain using xml: 
	I1001 19:20:37.706866   31154 main.go:141] libmachine: (ha-193737-m02) <domain type='kvm'>
	I1001 19:20:37.706875   31154 main.go:141] libmachine: (ha-193737-m02)   <name>ha-193737-m02</name>
	I1001 19:20:37.706882   31154 main.go:141] libmachine: (ha-193737-m02)   <memory unit='MiB'>2200</memory>
	I1001 19:20:37.706889   31154 main.go:141] libmachine: (ha-193737-m02)   <vcpu>2</vcpu>
	I1001 19:20:37.706899   31154 main.go:141] libmachine: (ha-193737-m02)   <features>
	I1001 19:20:37.706907   31154 main.go:141] libmachine: (ha-193737-m02)     <acpi/>
	I1001 19:20:37.706913   31154 main.go:141] libmachine: (ha-193737-m02)     <apic/>
	I1001 19:20:37.706921   31154 main.go:141] libmachine: (ha-193737-m02)     <pae/>
	I1001 19:20:37.706927   31154 main.go:141] libmachine: (ha-193737-m02)     
	I1001 19:20:37.706935   31154 main.go:141] libmachine: (ha-193737-m02)   </features>
	I1001 19:20:37.706943   31154 main.go:141] libmachine: (ha-193737-m02)   <cpu mode='host-passthrough'>
	I1001 19:20:37.706947   31154 main.go:141] libmachine: (ha-193737-m02)   
	I1001 19:20:37.706951   31154 main.go:141] libmachine: (ha-193737-m02)   </cpu>
	I1001 19:20:37.706958   31154 main.go:141] libmachine: (ha-193737-m02)   <os>
	I1001 19:20:37.706963   31154 main.go:141] libmachine: (ha-193737-m02)     <type>hvm</type>
	I1001 19:20:37.706969   31154 main.go:141] libmachine: (ha-193737-m02)     <boot dev='cdrom'/>
	I1001 19:20:37.706979   31154 main.go:141] libmachine: (ha-193737-m02)     <boot dev='hd'/>
	I1001 19:20:37.706999   31154 main.go:141] libmachine: (ha-193737-m02)     <bootmenu enable='no'/>
	I1001 19:20:37.707014   31154 main.go:141] libmachine: (ha-193737-m02)   </os>
	I1001 19:20:37.707026   31154 main.go:141] libmachine: (ha-193737-m02)   <devices>
	I1001 19:20:37.707037   31154 main.go:141] libmachine: (ha-193737-m02)     <disk type='file' device='cdrom'>
	I1001 19:20:37.707052   31154 main.go:141] libmachine: (ha-193737-m02)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/boot2docker.iso'/>
	I1001 19:20:37.707067   31154 main.go:141] libmachine: (ha-193737-m02)       <target dev='hdc' bus='scsi'/>
	I1001 19:20:37.707078   31154 main.go:141] libmachine: (ha-193737-m02)       <readonly/>
	I1001 19:20:37.707090   31154 main.go:141] libmachine: (ha-193737-m02)     </disk>
	I1001 19:20:37.707105   31154 main.go:141] libmachine: (ha-193737-m02)     <disk type='file' device='disk'>
	I1001 19:20:37.707118   31154 main.go:141] libmachine: (ha-193737-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 19:20:37.707132   31154 main.go:141] libmachine: (ha-193737-m02)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/ha-193737-m02.rawdisk'/>
	I1001 19:20:37.707142   31154 main.go:141] libmachine: (ha-193737-m02)       <target dev='hda' bus='virtio'/>
	I1001 19:20:37.707150   31154 main.go:141] libmachine: (ha-193737-m02)     </disk>
	I1001 19:20:37.707164   31154 main.go:141] libmachine: (ha-193737-m02)     <interface type='network'>
	I1001 19:20:37.707176   31154 main.go:141] libmachine: (ha-193737-m02)       <source network='mk-ha-193737'/>
	I1001 19:20:37.707186   31154 main.go:141] libmachine: (ha-193737-m02)       <model type='virtio'/>
	I1001 19:20:37.707196   31154 main.go:141] libmachine: (ha-193737-m02)     </interface>
	I1001 19:20:37.707206   31154 main.go:141] libmachine: (ha-193737-m02)     <interface type='network'>
	I1001 19:20:37.707217   31154 main.go:141] libmachine: (ha-193737-m02)       <source network='default'/>
	I1001 19:20:37.707227   31154 main.go:141] libmachine: (ha-193737-m02)       <model type='virtio'/>
	I1001 19:20:37.707241   31154 main.go:141] libmachine: (ha-193737-m02)     </interface>
	I1001 19:20:37.707259   31154 main.go:141] libmachine: (ha-193737-m02)     <serial type='pty'>
	I1001 19:20:37.707267   31154 main.go:141] libmachine: (ha-193737-m02)       <target port='0'/>
	I1001 19:20:37.707272   31154 main.go:141] libmachine: (ha-193737-m02)     </serial>
	I1001 19:20:37.707279   31154 main.go:141] libmachine: (ha-193737-m02)     <console type='pty'>
	I1001 19:20:37.707283   31154 main.go:141] libmachine: (ha-193737-m02)       <target type='serial' port='0'/>
	I1001 19:20:37.707290   31154 main.go:141] libmachine: (ha-193737-m02)     </console>
	I1001 19:20:37.707295   31154 main.go:141] libmachine: (ha-193737-m02)     <rng model='virtio'>
	I1001 19:20:37.707303   31154 main.go:141] libmachine: (ha-193737-m02)       <backend model='random'>/dev/random</backend>
	I1001 19:20:37.707306   31154 main.go:141] libmachine: (ha-193737-m02)     </rng>
	I1001 19:20:37.707313   31154 main.go:141] libmachine: (ha-193737-m02)     
	I1001 19:20:37.707317   31154 main.go:141] libmachine: (ha-193737-m02)     
	I1001 19:20:37.707323   31154 main.go:141] libmachine: (ha-193737-m02)   </devices>
	I1001 19:20:37.707331   31154 main.go:141] libmachine: (ha-193737-m02) </domain>
	I1001 19:20:37.707362   31154 main.go:141] libmachine: (ha-193737-m02) 
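
The XML dumped above is the libvirt domain definition for ha-193737-m02: the boot2docker ISO attached as a CD-ROM, the raw disk image, and two virtio NICs on the `default` and `mk-ha-193737` networks. A minimal sketch of defining and starting such a domain with the libvirt.org/go/libvirt bindings follows, assuming the XML has been saved to a local file; this is not the kvm2 driver's actual code.

    package main

    import (
        "log"
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        xml, err := os.ReadFile("ha-193737-m02.xml") // the domain XML logged above
        if err != nil {
            log.Fatal(err)
        }
        conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the profile
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // boots the VM ("Creating domain...")
            log.Fatal(err)
        }
        log.Println("domain ha-193737-m02 defined and started")
    }
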
	I1001 19:20:37.714050   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:2e:69:af in network default
	I1001 19:20:37.714587   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:37.714605   31154 main.go:141] libmachine: (ha-193737-m02) Ensuring networks are active...
	I1001 19:20:37.715386   31154 main.go:141] libmachine: (ha-193737-m02) Ensuring network default is active
	I1001 19:20:37.715688   31154 main.go:141] libmachine: (ha-193737-m02) Ensuring network mk-ha-193737 is active
	I1001 19:20:37.716026   31154 main.go:141] libmachine: (ha-193737-m02) Getting domain xml...
	I1001 19:20:37.716683   31154 main.go:141] libmachine: (ha-193737-m02) Creating domain...
	I1001 19:20:38.946823   31154 main.go:141] libmachine: (ha-193737-m02) Waiting to get IP...
	I1001 19:20:38.947612   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:38.948069   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:38.948111   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:38.948057   31541 retry.go:31] will retry after 211.487702ms: waiting for machine to come up
	I1001 19:20:39.161472   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:39.161945   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:39.161981   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:39.161920   31541 retry.go:31] will retry after 369.29813ms: waiting for machine to come up
	I1001 19:20:39.532486   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:39.533006   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:39.533034   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:39.532951   31541 retry.go:31] will retry after 340.79833ms: waiting for machine to come up
	I1001 19:20:39.875453   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:39.875902   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:39.875928   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:39.875855   31541 retry.go:31] will retry after 558.36179ms: waiting for machine to come up
	I1001 19:20:40.435617   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:40.436128   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:40.436156   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:40.436070   31541 retry.go:31] will retry after 724.412456ms: waiting for machine to come up
	I1001 19:20:41.161753   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:41.162215   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:41.162238   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:41.162183   31541 retry.go:31] will retry after 921.122771ms: waiting for machine to come up
	I1001 19:20:42.085509   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:42.085978   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:42.086002   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:42.085932   31541 retry.go:31] will retry after 886.914683ms: waiting for machine to come up
	I1001 19:20:42.974460   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:42.974900   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:42.974926   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:42.974856   31541 retry.go:31] will retry after 1.455695023s: waiting for machine to come up
	I1001 19:20:44.432773   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:44.433336   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:44.433365   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:44.433292   31541 retry.go:31] will retry after 1.415796379s: waiting for machine to come up
	I1001 19:20:45.850938   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:45.851337   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:45.851357   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:45.851309   31541 retry.go:31] will retry after 1.972979972s: waiting for machine to come up
	I1001 19:20:47.825356   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:47.825785   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:47.825812   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:47.825732   31541 retry.go:31] will retry after 1.92262401s: waiting for machine to come up
	I1001 19:20:49.750763   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:49.751160   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:49.751177   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:49.751137   31541 retry.go:31] will retry after 3.587777506s: waiting for machine to come up
	I1001 19:20:53.340173   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:53.340566   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find current IP address of domain ha-193737-m02 in network mk-ha-193737
	I1001 19:20:53.340617   31154 main.go:141] libmachine: (ha-193737-m02) DBG | I1001 19:20:53.340558   31541 retry.go:31] will retry after 3.748563727s: waiting for machine to come up
	I1001 19:20:57.093502   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.094007   31154 main.go:141] libmachine: (ha-193737-m02) Found IP for machine: 192.168.39.27
	I1001 19:20:57.094023   31154 main.go:141] libmachine: (ha-193737-m02) Reserving static IP address...
	I1001 19:20:57.094037   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has current primary IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.094391   31154 main.go:141] libmachine: (ha-193737-m02) DBG | unable to find host DHCP lease matching {name: "ha-193737-m02", mac: "52:54:00:7b:e4:d4", ip: "192.168.39.27"} in network mk-ha-193737
	I1001 19:20:57.171234   31154 main.go:141] libmachine: (ha-193737-m02) Reserved static IP address: 192.168.39.27
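
The "will retry after ..." lines above show the back-off used while waiting for the new VM to obtain a DHCP lease: the delay grows roughly geometrically with jitter (211ms, 369ms, ..., 3.7s) until the lease for MAC 52:54:00:7b:e4:d4 appears with 192.168.39.27. A generic Go sketch of that wait loop; lookup() is a hypothetical stand-in for reading the mk-ha-193737 DHCP leases, and the initial delay, growth factor and jitter are assumptions rather than minikube's exact schedule.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup() with a growing, jittered delay until it returns
    // an IP or the overall timeout expires.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil {
                return ip, nil
            }
            jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
            time.Sleep(jittered)
            delay *= 2 // cap omitted for brevity
        }
        return "", errors.New("timed out waiting for an IP")
    }

    func main() {
        // Toy lookup that "finds" an IP after a few attempts.
        attempts := 0
        ip, err := waitForIP(func() (string, error) {
            attempts++
            if attempts < 4 {
                return "", errors.New("no lease yet")
            }
            return "192.168.39.27", nil
        }, time.Minute)
        fmt.Println(ip, err)
    }
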
	I1001 19:20:57.171257   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Getting to WaitForSSH function...
	I1001 19:20:57.171265   31154 main.go:141] libmachine: (ha-193737-m02) Waiting for SSH to be available...
	I1001 19:20:57.173965   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.174561   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:57.174594   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.174717   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Using SSH client type: external
	I1001 19:20:57.174748   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa (-rw-------)
	I1001 19:20:57.174779   31154 main.go:141] libmachine: (ha-193737-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.27 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 19:20:57.174794   31154 main.go:141] libmachine: (ha-193737-m02) DBG | About to run SSH command:
	I1001 19:20:57.174810   31154 main.go:141] libmachine: (ha-193737-m02) DBG | exit 0
	I1001 19:20:57.304572   31154 main.go:141] libmachine: (ha-193737-m02) DBG | SSH cmd err, output: <nil>: 
	I1001 19:20:57.304868   31154 main.go:141] libmachine: (ha-193737-m02) KVM machine creation complete!
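	The "exit 0" command sent just before this is only a liveness probe: if the external ssh client can authenticate and run it, sshd on the guest is ready for provisioning. A hedged sketch of that check via the system ssh binary (sshAlive is a hypothetical helper; the key path is a placeholder and the option set mirrors the one logged above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // sshAlive runs `ssh ... exit 0` against the guest; a nil error means the
    // daemon accepted the key, which is all the probe needs to know.
    func sshAlive(user, addr, keyPath string) error {
        args := []string{
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            fmt.Sprintf("%s@%s", user, addr),
            "exit", "0",
        }
        return exec.Command("ssh", args...).Run()
    }

    func main() {
        // Placeholder values; the log uses the docker user, the machine's id_rsa
        // under .minikube/machines/ha-193737-m02, and 192.168.39.27.
        if err := sshAlive("docker", "192.168.39.27", "/path/to/id_rsa"); err != nil {
            fmt.Println("ssh not ready yet:", err)
            return
        }
        fmt.Println("ssh is available")
    }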
	I1001 19:20:57.305162   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetConfigRaw
	I1001 19:20:57.305752   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:57.305953   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:57.306163   31154 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 19:20:57.306232   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetState
	I1001 19:20:57.307715   31154 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 19:20:57.307729   31154 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 19:20:57.307736   31154 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 19:20:57.307743   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:57.310409   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.310801   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:57.310826   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.310956   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:57.311136   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.311267   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.311408   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:57.311603   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:57.311799   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1001 19:20:57.311811   31154 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 19:20:57.423687   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 19:20:57.423716   31154 main.go:141] libmachine: Detecting the provisioner...
	I1001 19:20:57.423741   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:57.426918   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.427323   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:57.427358   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.427583   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:57.427788   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.428027   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.428201   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:57.428392   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:57.428632   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1001 19:20:57.428762   31154 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 19:20:57.541173   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 19:20:57.541232   31154 main.go:141] libmachine: found compatible host: buildroot
	I1001 19:20:57.541238   31154 main.go:141] libmachine: Provisioning with buildroot...
	I1001 19:20:57.541245   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetMachineName
	I1001 19:20:57.541504   31154 buildroot.go:166] provisioning hostname "ha-193737-m02"
	I1001 19:20:57.541527   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetMachineName
	I1001 19:20:57.541689   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:57.544406   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.544791   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:57.544830   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.544962   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:57.545135   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.545283   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.545382   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:57.545543   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:57.545753   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1001 19:20:57.545769   31154 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-193737-m02 && echo "ha-193737-m02" | sudo tee /etc/hostname
	I1001 19:20:57.675116   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-193737-m02
	
	I1001 19:20:57.675147   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:57.678239   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.678600   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:57.678624   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.678822   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:57.679011   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.679146   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:57.679254   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:57.679397   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:57.679573   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1001 19:20:57.679599   31154 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-193737-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-193737-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-193737-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 19:20:57.800899   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 19:20:57.800928   31154 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 19:20:57.800946   31154 buildroot.go:174] setting up certificates
	I1001 19:20:57.800957   31154 provision.go:84] configureAuth start
	I1001 19:20:57.800969   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetMachineName
	I1001 19:20:57.801194   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetIP
	I1001 19:20:57.803613   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.803954   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:57.803982   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.804134   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:57.806340   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.806657   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:57.806678   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:57.806860   31154 provision.go:143] copyHostCerts
	I1001 19:20:57.806892   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:20:57.806929   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 19:20:57.806937   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:20:57.807013   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 19:20:57.807084   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:20:57.807101   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 19:20:57.807107   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:20:57.807131   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 19:20:57.807178   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:20:57.807196   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 19:20:57.807202   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:20:57.807221   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 19:20:57.807269   31154 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.ha-193737-m02 san=[127.0.0.1 192.168.39.27 ha-193737-m02 localhost minikube]
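	At this point provision.go issues a per-machine server certificate signed by the minikube CA, with the IPs and hostnames listed above as subject alternative names. A self-contained sketch of that kind of issuance with Go's crypto/x509 (newServerCert is a hypothetical helper; a throwaway CA is generated inline, whereas the real provisioner loads ca.pem/ca-key.pem from .minikube/certs):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // newServerCert issues a server certificate signed by ca with the given
    // IP and DNS subject alternative names.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP, dns []string) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-193737-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips,
            DNSNames:     dns,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
    }

    func main() {
        // Throwaway self-signed CA, only so the sketch runs on its own.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        ca, _ := x509.ParseCertificate(caDER)

        cert, _, err := newServerCert(ca, caKey,
            []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.27")},
            []string{"ha-193737-m02", "localhost", "minikube"})
        if err != nil {
            panic(err)
        }
        fmt.Printf("issued %d bytes of PEM\n", len(cert))
    }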
	I1001 19:20:58.056549   31154 provision.go:177] copyRemoteCerts
	I1001 19:20:58.056608   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 19:20:58.056631   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:58.059291   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.059620   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.059653   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.059823   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:58.060033   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.060174   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:58.060291   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa Username:docker}
	I1001 19:20:58.146502   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 19:20:58.146577   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 19:20:58.170146   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 19:20:58.170211   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 19:20:58.193090   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 19:20:58.193172   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 19:20:58.215033   31154 provision.go:87] duration metric: took 414.061487ms to configureAuth
	I1001 19:20:58.215067   31154 buildroot.go:189] setting minikube options for container-runtime
	I1001 19:20:58.215250   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:20:58.215327   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:58.218149   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.218497   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.218527   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.218653   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:58.218868   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.219033   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.219156   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:58.219300   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:58.219460   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1001 19:20:58.219473   31154 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 19:20:58.470145   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 19:20:58.470178   31154 main.go:141] libmachine: Checking connection to Docker...
	I1001 19:20:58.470189   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetURL
	I1001 19:20:58.471402   31154 main.go:141] libmachine: (ha-193737-m02) DBG | Using libvirt version 6000000
	I1001 19:20:58.474024   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.474371   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.474412   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.474613   31154 main.go:141] libmachine: Docker is up and running!
	I1001 19:20:58.474631   31154 main.go:141] libmachine: Reticulating splines...
	I1001 19:20:58.474639   31154 client.go:171] duration metric: took 21.216022282s to LocalClient.Create
	I1001 19:20:58.474664   31154 start.go:167] duration metric: took 21.216081227s to libmachine.API.Create "ha-193737"
	I1001 19:20:58.474674   31154 start.go:293] postStartSetup for "ha-193737-m02" (driver="kvm2")
	I1001 19:20:58.474687   31154 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 19:20:58.474711   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:58.475026   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 19:20:58.475056   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:58.477612   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.478051   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.478084   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.478170   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:58.478359   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.478475   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:58.478613   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa Username:docker}
	I1001 19:20:58.566449   31154 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 19:20:58.570622   31154 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 19:20:58.570648   31154 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 19:20:58.570715   31154 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 19:20:58.570786   31154 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 19:20:58.570798   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /etc/ssl/certs/184302.pem
	I1001 19:20:58.570944   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 19:20:58.579535   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:20:58.601457   31154 start.go:296] duration metric: took 126.771104ms for postStartSetup
	I1001 19:20:58.601513   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetConfigRaw
	I1001 19:20:58.602068   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetIP
	I1001 19:20:58.604495   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.604874   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.604900   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.605223   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:20:58.605434   31154 start.go:128] duration metric: took 21.366818669s to createHost
	I1001 19:20:58.605467   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:58.607650   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.608026   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.608051   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.608184   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:58.608337   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.608453   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.608557   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:58.608693   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:20:58.608837   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1001 19:20:58.608847   31154 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 19:20:58.721980   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727810458.681508368
	
	I1001 19:20:58.722008   31154 fix.go:216] guest clock: 1727810458.681508368
	I1001 19:20:58.722018   31154 fix.go:229] Guest: 2024-10-01 19:20:58.681508368 +0000 UTC Remote: 2024-10-01 19:20:58.605448095 +0000 UTC m=+70.833286913 (delta=76.060273ms)
	I1001 19:20:58.722040   31154 fix.go:200] guest clock delta is within tolerance: 76.060273ms
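	The guest-clock check above is a plain delta comparison: read "date +%s.%N" on the guest, subtract the host's wall clock, and accept the machine if the difference stays inside a tolerance window. A small sketch of that comparison (the 2s tolerance below is an assumption for illustration; the logged delta is about 76ms):

    package main

    import (
        "fmt"
        "time"
    )

    // clockDeltaOK mirrors the "guest clock delta is within tolerance" check:
    // the absolute difference between guest and host time must stay small.
    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        host := time.Now()
        guest := host.Add(76 * time.Millisecond) // roughly the delta in the log
        d, ok := clockDeltaOK(guest, host, 2*time.Second)
        fmt.Printf("delta=%v within tolerance: %v\n", d, ok)
    }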
	I1001 19:20:58.722049   31154 start.go:83] releasing machines lock for "ha-193737-m02", held for 21.483548504s
	I1001 19:20:58.722074   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:58.722316   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetIP
	I1001 19:20:58.725092   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.725406   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.725439   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.727497   31154 out.go:177] * Found network options:
	I1001 19:20:58.728546   31154 out.go:177]   - NO_PROXY=192.168.39.14
	W1001 19:20:58.729434   31154 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 19:20:58.729479   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:58.729929   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:58.730082   31154 main.go:141] libmachine: (ha-193737-m02) Calling .DriverName
	I1001 19:20:58.730149   31154 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 19:20:58.730189   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	W1001 19:20:58.730253   31154 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 19:20:58.730326   31154 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 19:20:58.730347   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHHostname
	I1001 19:20:58.732847   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.732897   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.733209   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.733238   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.733263   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:20:58.733277   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:20:58.733405   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:58.733481   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHPort
	I1001 19:20:58.733618   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.733656   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHKeyPath
	I1001 19:20:58.733727   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:58.733802   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetSSHUsername
	I1001 19:20:58.733822   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa Username:docker}
	I1001 19:20:58.733934   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m02/id_rsa Username:docker}
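	The two "fail to check proxy env: Error ip not in block" warnings above appear to come from checking whether the new node's IP (192.168.39.27) is covered by the NO_PROXY value shown earlier (just 192.168.39.14); since it is not, the warning is logged and the registry probe goes out directly. A hedged sketch of that kind of membership test (ipInNoProxy is a hypothetical helper, not minikube's proxy.go API):

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // ipInNoProxy reports whether ip is covered by a comma-separated NO_PROXY
    // value, accepting both plain addresses and CIDR blocks.
    func ipInNoProxy(ip string, noProxy string) bool {
        addr := net.ParseIP(ip)
        for _, entry := range strings.Split(noProxy, ",") {
            entry = strings.TrimSpace(entry)
            if entry == "" {
                continue
            }
            if _, block, err := net.ParseCIDR(entry); err == nil {
                if block.Contains(addr) {
                    return true
                }
                continue
            }
            if entry == ip {
                return true
            }
        }
        return false
    }

    func main() {
        fmt.Println(ipInNoProxy("192.168.39.27", "192.168.39.14"))   // false, as in the log
        fmt.Println(ipInNoProxy("192.168.39.27", "192.168.39.0/24")) // true
    }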
	I1001 19:20:58.972871   31154 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 19:20:58.978194   31154 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 19:20:58.978260   31154 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 19:20:58.994663   31154 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 19:20:58.994684   31154 start.go:495] detecting cgroup driver to use...
	I1001 19:20:58.994738   31154 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 19:20:59.011009   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 19:20:59.025521   31154 docker.go:217] disabling cri-docker service (if available) ...
	I1001 19:20:59.025608   31154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 19:20:59.039348   31154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 19:20:59.052807   31154 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 19:20:59.169289   31154 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 19:20:59.334757   31154 docker.go:233] disabling docker service ...
	I1001 19:20:59.334834   31154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 19:20:59.348035   31154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 19:20:59.360660   31154 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 19:20:59.486509   31154 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 19:20:59.604588   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 19:20:59.617998   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 19:20:59.635554   31154 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 19:20:59.635626   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:59.645574   31154 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 19:20:59.645648   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:59.655487   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:59.665223   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:59.674970   31154 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 19:20:59.684872   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:59.694696   31154 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:20:59.710618   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
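	The run of sed one-liners above only rewrites a few keys in the CRI-O drop-in (pause image, cgroup manager, conmon cgroup, the unprivileged-port sysctl). The same substitutions expressed in Go over an in-memory copy, as a sketch (the sample config content is made up; the replacement values mirror the log):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
    [crio.runtime]
    cgroup_manager = "systemd"
    `
        // Rewrite the pause image line, as the first sed command does.
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
        // Switch the cgroup manager to cgroupfs, as the second sed command does.
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        fmt.Print(conf)
    }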
	I1001 19:20:59.721089   31154 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 19:20:59.731283   31154 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 19:20:59.731352   31154 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 19:20:59.746274   31154 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 19:20:59.756184   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:20:59.870307   31154 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 19:20:59.956939   31154 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 19:20:59.957022   31154 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 19:20:59.961766   31154 start.go:563] Will wait 60s for crictl version
	I1001 19:20:59.961831   31154 ssh_runner.go:195] Run: which crictl
	I1001 19:20:59.965776   31154 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 19:21:00.010361   31154 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 19:21:00.010446   31154 ssh_runner.go:195] Run: crio --version
	I1001 19:21:00.041083   31154 ssh_runner.go:195] Run: crio --version
	I1001 19:21:00.075668   31154 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 19:21:00.077105   31154 out.go:177]   - env NO_PROXY=192.168.39.14
	I1001 19:21:00.078374   31154 main.go:141] libmachine: (ha-193737-m02) Calling .GetIP
	I1001 19:21:00.081375   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:21:00.081679   31154 main.go:141] libmachine: (ha-193737-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e4:d4", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:51 +0000 UTC Type:0 Mac:52:54:00:7b:e4:d4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-193737-m02 Clientid:01:52:54:00:7b:e4:d4}
	I1001 19:21:00.081711   31154 main.go:141] libmachine: (ha-193737-m02) DBG | domain ha-193737-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:7b:e4:d4 in network mk-ha-193737
	I1001 19:21:00.081983   31154 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 19:21:00.086306   31154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
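	The hosts-file update above uses a grep -v / echo pair so the host.minikube.internal entry is replaced rather than duplicated. A rough Go equivalent of that rewrite, operating on an in-memory copy (addHostAlias is a hypothetical helper):

    package main

    import (
        "fmt"
        "strings"
    )

    // addHostAlias drops any stale line for the alias, then appends the fresh
    // ip<TAB>alias mapping, mirroring the shell pipeline in the log.
    func addHostAlias(hosts, ip, alias string) string {
        var out []string
        for _, line := range strings.Split(hosts, "\n") {
            if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+alias) {
                continue
            }
            if line != "" {
                out = append(out, line)
            }
        }
        out = append(out, ip+"\t"+alias)
        return strings.Join(out, "\n") + "\n"
    }

    func main() {
        hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
        fmt.Print(addHostAlias(hosts, "192.168.39.1", "host.minikube.internal"))
    }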
	I1001 19:21:00.099180   31154 mustload.go:65] Loading cluster: ha-193737
	I1001 19:21:00.099450   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:21:00.099790   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:21:00.099833   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:21:00.115527   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43263
	I1001 19:21:00.116081   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:21:00.116546   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:21:00.116565   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:21:00.116887   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:21:00.117121   31154 main.go:141] libmachine: (ha-193737) Calling .GetState
	I1001 19:21:00.118679   31154 host.go:66] Checking if "ha-193737" exists ...
	I1001 19:21:00.118968   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:21:00.119005   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:21:00.133660   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37793
	I1001 19:21:00.134171   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:21:00.134638   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:21:00.134657   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:21:00.134945   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:21:00.135112   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:21:00.135251   31154 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737 for IP: 192.168.39.27
	I1001 19:21:00.135263   31154 certs.go:194] generating shared ca certs ...
	I1001 19:21:00.135281   31154 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:21:00.135407   31154 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 19:21:00.135448   31154 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 19:21:00.135454   31154 certs.go:256] generating profile certs ...
	I1001 19:21:00.135523   31154 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key
	I1001 19:21:00.135547   31154 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.b6f75b80
	I1001 19:21:00.135561   31154 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.b6f75b80 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.14 192.168.39.27 192.168.39.254]
	I1001 19:21:00.686434   31154 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.b6f75b80 ...
	I1001 19:21:00.686467   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.b6f75b80: {Name:mkeb01bd9448160d7d89858bc8ed1c53818e2061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:21:00.686650   31154 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.b6f75b80 ...
	I1001 19:21:00.686663   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.b6f75b80: {Name:mk3a8c2ce4c29185d261167caf7207467c082c1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:21:00.686733   31154 certs.go:381] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.b6f75b80 -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt
	I1001 19:21:00.686905   31154 certs.go:385] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.b6f75b80 -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key
	I1001 19:21:00.687041   31154 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key
	I1001 19:21:00.687055   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 19:21:00.687068   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 19:21:00.687080   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 19:21:00.687093   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 19:21:00.687105   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 19:21:00.687117   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 19:21:00.687128   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 19:21:00.687140   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 19:21:00.687188   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem (1338 bytes)
	W1001 19:21:00.687218   31154 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430_empty.pem, impossibly tiny 0 bytes
	I1001 19:21:00.687227   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 19:21:00.687249   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 19:21:00.687269   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 19:21:00.687290   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 19:21:00.687321   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:21:00.687345   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:21:00.687358   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem -> /usr/share/ca-certificates/18430.pem
	I1001 19:21:00.687370   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /usr/share/ca-certificates/184302.pem
	I1001 19:21:00.687398   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:21:00.690221   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:21:00.690721   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:21:00.690750   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:21:00.690891   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:21:00.691103   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:21:00.691297   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:21:00.691469   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:21:00.764849   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1001 19:21:00.770067   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1001 19:21:00.781099   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1001 19:21:00.785191   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1001 19:21:00.796213   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1001 19:21:00.800405   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1001 19:21:00.810899   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1001 19:21:00.815556   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1001 19:21:00.825792   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1001 19:21:00.830049   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1001 19:21:00.841022   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1001 19:21:00.845622   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1001 19:21:00.857011   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 19:21:00.881387   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 19:21:00.905420   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 19:21:00.930584   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 19:21:00.957479   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1001 19:21:00.982115   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 19:21:01.005996   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 19:21:01.031948   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 19:21:01.059129   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 19:21:01.084143   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem --> /usr/share/ca-certificates/18430.pem (1338 bytes)
	I1001 19:21:01.109909   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /usr/share/ca-certificates/184302.pem (1708 bytes)
	I1001 19:21:01.133720   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1001 19:21:01.150500   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1001 19:21:01.168599   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1001 19:21:01.185368   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1001 19:21:01.202279   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1001 19:21:01.218930   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1001 19:21:01.235286   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1001 19:21:01.251963   31154 ssh_runner.go:195] Run: openssl version
	I1001 19:21:01.257542   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 19:21:01.268254   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:21:01.272732   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:21:01.272802   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:21:01.278777   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 19:21:01.290880   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18430.pem && ln -fs /usr/share/ca-certificates/18430.pem /etc/ssl/certs/18430.pem"
	I1001 19:21:01.301840   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18430.pem
	I1001 19:21:01.306397   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 19:21:01.306469   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18430.pem
	I1001 19:21:01.312313   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18430.pem /etc/ssl/certs/51391683.0"
	I1001 19:21:01.322717   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/184302.pem && ln -fs /usr/share/ca-certificates/184302.pem /etc/ssl/certs/184302.pem"
	I1001 19:21:01.333015   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/184302.pem
	I1001 19:21:01.337340   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 19:21:01.337400   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/184302.pem
	I1001 19:21:01.343033   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/184302.pem /etc/ssl/certs/3ec20f2e.0"
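	The openssl/ln pairs above install each CA under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0) so TLS clients on the node can locate it during verification. A sketch of that step that shells out to openssl the same way (linkBySubjectHash is a hypothetical helper; paths are placeholders and the commands would need to run on the target host with root):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash asks openssl for the certificate's subject hash, then
    // points <certsDir>/<hash>.0 at the PEM, like the `ln -fs` in the log.
    func linkBySubjectHash(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // refresh an existing link, as `ln -fs` would
        return os.Symlink(certPath, link)
    }

    func main() {
        // Placeholder paths; the log links minikubeCA.pem, 18430.pem and 184302.pem.
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Println("link failed:", err)
        }
    }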
	I1001 19:21:01.354495   31154 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 19:21:01.358223   31154 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 19:21:01.358275   31154 kubeadm.go:934] updating node {m02 192.168.39.27 8443 v1.31.1 crio true true} ...
	I1001 19:21:01.358349   31154 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-193737-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.27
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
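	The kubelet drop-in shown above is rendered from the node's values: the binaries path carries the Kubernetes version, and --hostname-override/--node-ip carry the node name and IP. A trimmed text/template sketch of that rendering (the real unit template contains more than is shown in the log excerpt):

    package main

    import (
        "os"
        "text/template"
    )

    // unit is a reduced version of the drop-in above, parameterized on the
    // values that differ per node.
    const unit = `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        _ = t.Execute(os.Stdout, map[string]string{
            "Version": "v1.31.1",
            "Node":    "ha-193737-m02",
            "IP":      "192.168.39.27",
        })
    }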
	I1001 19:21:01.358373   31154 kube-vip.go:115] generating kube-vip config ...
	I1001 19:21:01.358405   31154 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1001 19:21:01.374873   31154 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1001 19:21:01.374943   31154 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1001 19:21:01.374989   31154 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 19:21:01.384444   31154 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1001 19:21:01.384518   31154 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1001 19:21:01.394161   31154 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1001 19:21:01.394190   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 19:21:01.394191   31154 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1001 19:21:01.394256   31154 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 19:21:01.394189   31154 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1001 19:21:01.398439   31154 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1001 19:21:01.398487   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1001 19:21:02.673266   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 19:21:02.673366   31154 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 19:21:02.678383   31154 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1001 19:21:02.678421   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1001 19:21:02.683681   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 19:21:02.723149   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 19:21:02.723251   31154 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 19:21:02.737865   31154 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1001 19:21:02.737908   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1001 19:21:03.230970   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1001 19:21:03.240943   31154 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1001 19:21:03.257655   31154 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 19:21:03.274741   31154 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1001 19:21:03.291537   31154 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1001 19:21:03.295338   31154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 19:21:03.307165   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:21:03.463069   31154 ssh_runner.go:195] Run: sudo systemctl start kubelet
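
The binary transfer above follows a simple idempotent pattern: stat each target under /var/lib/minikube/binaries/v1.31.1, and only when that check fails copy the cached kubectl, kubeadm, and kubelet binaries across before writing the kubelet systemd drop-in, refreshing /etc/hosts with control-plane.minikube.internal, and starting the service. A hedged Go sketch of that stat-then-copy check, with run and copyFile standing in for minikube's SSH runner (both names are hypothetical):

package provision

import "fmt"

// ensureBinary mirrors the existence-check-then-scp flow in the log above.
// run executes a command on the node and copyFile transfers a local file to it;
// both are hypothetical stand-ins for minikube's ssh_runner.
func ensureBinary(run func(cmd string) error, copyFile func(src, dst string) error, src, dst string) error {
	// stat exits non-zero when the path is missing, which is the
	// "Process exited with status 1" existence check seen in the log.
	if err := run(fmt.Sprintf("stat -c '%%s %%y' %s", dst)); err == nil {
		return nil // binary already present, skip the transfer
	}
	return copyFile(src, dst) // otherwise push the cached binary to the node
}
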
	I1001 19:21:03.480147   31154 host.go:66] Checking if "ha-193737" exists ...
	I1001 19:21:03.480689   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:21:03.480744   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:21:03.495841   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39579
	I1001 19:21:03.496320   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:21:03.496880   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:21:03.496904   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:21:03.497248   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:21:03.497421   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:21:03.497546   31154 start.go:317] joinCluster: &{Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:21:03.497680   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1001 19:21:03.497702   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:21:03.500751   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:21:03.501276   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:21:03.501306   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:21:03.501495   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:21:03.501701   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:21:03.501893   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:21:03.502064   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:21:03.648333   31154 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:21:03.648405   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n692vg.wpdyj1cg443tmqgp --discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-193737-m02 --control-plane --apiserver-advertise-address=192.168.39.27 --apiserver-bind-port=8443"
	I1001 19:21:25.467048   31154 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n692vg.wpdyj1cg443tmqgp --discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-193737-m02 --control-plane --apiserver-advertise-address=192.168.39.27 --apiserver-bind-port=8443": (21.818614216s)
	I1001 19:21:25.467085   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1001 19:21:26.061914   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-193737-m02 minikube.k8s.io/updated_at=2024_10_01T19_21_26_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=ha-193737 minikube.k8s.io/primary=false
	I1001 19:21:26.203974   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-193737-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1001 19:21:26.315094   31154 start.go:319] duration metric: took 22.817544624s to joinCluster
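
Joining the second control-plane node accounts for about 21.8s of the 22.8s total: the log runs kubeadm join against control-plane.minikube.internal:8443 with --control-plane and an advertise address of 192.168.39.27, then re-enables the kubelet and labels/untaints the new node. A sketch of assembling that join command from its parts; the flags are the ones visible in the log, while the struct and helper below are illustrative rather than minikube's own code:

package provision

import "fmt"

// joinArgs holds the values that appear in the kubeadm join line above;
// buildJoinCmd is a hypothetical helper, not minikube's implementation.
type joinArgs struct {
	Endpoint    string // e.g. control-plane.minikube.internal:8443
	Token       string // bootstrap token from "kubeadm token create"
	CACertHash  string // sha256:<hash> of the cluster CA
	NodeName    string // e.g. ha-193737-m02
	AdvertiseIP string // e.g. 192.168.39.27
}

func buildJoinCmd(a joinArgs) string {
	return fmt.Sprintf(
		"kubeadm join %s --token %s --discovery-token-ca-cert-hash %s "+
			"--ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock "+
			"--node-name=%s --control-plane --apiserver-advertise-address=%s --apiserver-bind-port=8443",
		a.Endpoint, a.Token, a.CACertHash, a.NodeName, a.AdvertiseIP)
}
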
	I1001 19:21:26.315164   31154 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:21:26.315617   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:21:26.316452   31154 out.go:177] * Verifying Kubernetes components...
	I1001 19:21:26.317646   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:21:26.611377   31154 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 19:21:26.640565   31154 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 19:21:26.640891   31154 kapi.go:59] client config for ha-193737: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.crt", KeyFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key", CAFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1001 19:21:26.640968   31154 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.14:8443
	I1001 19:21:26.641227   31154 node_ready.go:35] waiting up to 6m0s for node "ha-193737-m02" to be "Ready" ...
	I1001 19:21:26.641356   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:26.641366   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:26.641375   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:26.641380   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:26.653154   31154 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1001 19:21:27.141735   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:27.141756   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:27.141764   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:27.141768   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:27.148495   31154 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1001 19:21:27.641626   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:27.641661   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:27.641672   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:27.641677   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:27.646178   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:28.142172   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:28.142200   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:28.142210   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:28.142216   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:28.146315   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:28.641888   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:28.641917   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:28.641931   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:28.641940   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:28.645578   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:28.646211   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:29.141557   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:29.141582   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:29.141592   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:29.141597   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:29.146956   31154 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1001 19:21:29.641796   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:29.641817   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:29.641824   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:29.641829   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:29.645155   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:30.142079   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:30.142103   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:30.142114   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:30.142119   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:30.145277   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:30.642189   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:30.642209   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:30.642217   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:30.642220   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:30.646863   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:30.647494   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:31.141763   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:31.141784   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:31.141796   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:31.141801   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:31.145813   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:31.641815   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:31.641836   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:31.641847   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:31.641853   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:31.645200   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:32.141448   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:32.141473   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:32.141486   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:32.141493   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:32.145295   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:32.641622   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:32.641643   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:32.641649   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:32.641653   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:32.645174   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:33.141797   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:33.141818   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:33.141826   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:33.141830   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:33.145091   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:33.145688   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:33.641422   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:33.641445   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:33.641454   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:33.641464   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:33.644675   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:34.141560   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:34.141589   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:34.141601   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:34.141607   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:34.145278   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:34.641659   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:34.641678   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:34.641686   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:34.641691   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:34.644811   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:35.142049   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:35.142075   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:35.142083   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:35.142087   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:35.145002   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:35.641531   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:35.641559   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:35.641573   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:35.641586   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:35.644829   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:35.645348   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:36.141635   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:36.141655   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:36.141663   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:36.141668   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:36.144536   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:36.642098   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:36.642119   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:36.642127   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:36.642130   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:36.645313   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:37.142420   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:37.142468   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:37.142477   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:37.142481   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:37.145780   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:37.641627   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:37.641647   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:37.641655   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:37.641659   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:37.644484   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:38.142220   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:38.142244   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:38.142255   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:38.142262   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:38.145466   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:38.146172   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:38.641992   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:38.642015   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:38.642024   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:38.642028   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:38.644515   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:39.141559   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:39.141585   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:39.141595   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:39.141601   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:39.145034   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:39.641804   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:39.641838   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:39.641845   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:39.641850   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:39.646296   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:40.142227   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:40.142248   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:40.142256   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:40.142260   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:40.145591   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:40.642234   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:40.642258   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:40.642267   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:40.642271   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:40.645384   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:40.646037   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:41.142410   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:41.142429   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:41.142437   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:41.142441   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:41.145729   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:41.642146   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:41.642167   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:41.642174   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:41.642178   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:41.645647   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:42.141537   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:42.141559   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:42.141569   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:42.141575   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:42.144817   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:42.642106   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:42.642127   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:42.642136   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:42.642141   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:42.645934   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:42.646419   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:43.141441   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:43.141464   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:43.141472   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:43.141476   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:43.144793   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:43.642316   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:43.642337   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:43.642345   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:43.642351   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:43.646007   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:44.142085   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:44.142106   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:44.142114   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:44.142117   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:44.145431   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:44.642346   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:44.642368   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:44.642376   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:44.642379   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:44.645860   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:45.142289   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:45.142312   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.142323   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.142330   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.145780   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:45.146379   31154 node_ready.go:53] node "ha-193737-m02" has status "Ready":"False"
	I1001 19:21:45.641699   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:45.641725   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.641733   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.641736   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.645813   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:45.646591   31154 node_ready.go:49] node "ha-193737-m02" has status "Ready":"True"
	I1001 19:21:45.646618   31154 node_ready.go:38] duration metric: took 19.005351721s for node "ha-193737-m02" to be "Ready" ...
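
The Ready wait above is a plain polling loop: roughly every 500ms the node object is fetched from the API server and its conditions inspected until Ready flips to True, which here takes about 19s. Assuming client-go, a minimal sketch of such a loop might look like this (illustrative only, not minikube's node_ready implementation):

package provision

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the API server on a fixed interval, as the log does,
// until the named node reports a Ready=True condition or ctx expires.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil // node joined and kubelet reports Ready
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // timeout or cancellation wins over further polling
		case <-tick.C:
		}
	}
}
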
	I1001 19:21:45.646627   31154 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 19:21:45.646691   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:21:45.646700   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.646707   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.646713   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.650655   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:45.657881   31154 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hd5hv" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.657971   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hd5hv
	I1001 19:21:45.657980   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.657988   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.657993   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.660900   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:45.661620   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:45.661639   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.661649   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.661657   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.665733   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:45.666386   31154 pod_ready.go:93] pod "coredns-7c65d6cfc9-hd5hv" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:45.666409   31154 pod_ready.go:82] duration metric: took 8.499445ms for pod "coredns-7c65d6cfc9-hd5hv" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.666421   31154 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v2wsx" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.666492   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-v2wsx
	I1001 19:21:45.666502   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.666512   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.666518   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.669133   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:45.669889   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:45.669907   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.669918   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.669923   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.672275   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:45.672755   31154 pod_ready.go:93] pod "coredns-7c65d6cfc9-v2wsx" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:45.672774   31154 pod_ready.go:82] duration metric: took 6.344856ms for pod "coredns-7c65d6cfc9-v2wsx" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.672786   31154 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.672846   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-ha-193737
	I1001 19:21:45.672857   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.672867   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.672872   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.675287   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:45.675893   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:45.675911   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.675922   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.675930   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.678241   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:45.678741   31154 pod_ready.go:93] pod "etcd-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:45.678763   31154 pod_ready.go:82] duration metric: took 5.967949ms for pod "etcd-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.678772   31154 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.678833   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-ha-193737-m02
	I1001 19:21:45.678850   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.678858   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.678871   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.681191   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:45.681800   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:45.681815   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.681825   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.681830   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.683889   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:45.684431   31154 pod_ready.go:93] pod "etcd-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:45.684453   31154 pod_ready.go:82] duration metric: took 5.673081ms for pod "etcd-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.684473   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:45.841835   31154 request.go:632] Waited for 157.291258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737
	I1001 19:21:45.841900   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737
	I1001 19:21:45.841906   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:45.841913   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:45.841919   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:45.845357   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:46.042508   31154 request.go:632] Waited for 196.405333ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:46.042588   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:46.042599   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:46.042611   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:46.042619   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:46.046254   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:46.046866   31154 pod_ready.go:93] pod "kube-apiserver-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:46.046884   31154 pod_ready.go:82] duration metric: took 362.399581ms for pod "kube-apiserver-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:46.046893   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:46.242039   31154 request.go:632] Waited for 195.063872ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737-m02
	I1001 19:21:46.242144   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737-m02
	I1001 19:21:46.242157   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:46.242168   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:46.242174   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:46.246032   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:46.441916   31154 request.go:632] Waited for 195.330252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:46.441997   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:46.442003   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:46.442011   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:46.442014   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:46.445457   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:46.445994   31154 pod_ready.go:93] pod "kube-apiserver-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:46.446014   31154 pod_ready.go:82] duration metric: took 399.112887ms for pod "kube-apiserver-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:46.446031   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:46.642080   31154 request.go:632] Waited for 195.96912ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737
	I1001 19:21:46.642133   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737
	I1001 19:21:46.642138   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:46.642146   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:46.642149   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:46.645872   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:46.842116   31154 request.go:632] Waited for 195.42226ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:46.842206   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:46.842215   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:46.842223   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:46.842231   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:46.845287   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:46.845743   31154 pod_ready.go:93] pod "kube-controller-manager-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:46.845760   31154 pod_ready.go:82] duration metric: took 399.720077ms for pod "kube-controller-manager-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:46.845770   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:47.042048   31154 request.go:632] Waited for 196.194982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737-m02
	I1001 19:21:47.042116   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737-m02
	I1001 19:21:47.042122   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:47.042129   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:47.042134   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:47.045174   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:47.242154   31154 request.go:632] Waited for 196.389668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:47.242211   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:47.242216   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:47.242224   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:47.242228   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:47.246078   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:47.246437   31154 pod_ready.go:93] pod "kube-controller-manager-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:47.246460   31154 pod_ready.go:82] duration metric: took 400.684034ms for pod "kube-controller-manager-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:47.246470   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4294m" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:47.442023   31154 request.go:632] Waited for 195.496186ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4294m
	I1001 19:21:47.442102   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4294m
	I1001 19:21:47.442107   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:47.442115   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:47.442119   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:47.446724   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:47.642099   31154 request.go:632] Waited for 194.348221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:47.642163   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:47.642174   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:47.642181   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:47.642186   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:47.645393   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:47.645928   31154 pod_ready.go:93] pod "kube-proxy-4294m" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:47.645950   31154 pod_ready.go:82] duration metric: took 399.472712ms for pod "kube-proxy-4294m" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:47.645961   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zpsll" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:47.842563   31154 request.go:632] Waited for 196.53672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zpsll
	I1001 19:21:47.842654   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zpsll
	I1001 19:21:47.842670   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:47.842677   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:47.842685   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:47.846435   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:48.042435   31154 request.go:632] Waited for 195.268783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:48.042516   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:48.042523   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:48.042531   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:48.042535   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:48.045444   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:48.045979   31154 pod_ready.go:93] pod "kube-proxy-zpsll" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:48.045999   31154 pod_ready.go:82] duration metric: took 400.030874ms for pod "kube-proxy-zpsll" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:48.046008   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:48.242127   31154 request.go:632] Waited for 196.061352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737
	I1001 19:21:48.242188   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737
	I1001 19:21:48.242194   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:48.242200   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:48.242205   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:48.245701   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:48.442714   31154 request.go:632] Waited for 196.392016ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:48.442788   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:21:48.442796   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:48.442806   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:48.442811   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:48.445488   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:21:48.445923   31154 pod_ready.go:93] pod "kube-scheduler-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:48.445941   31154 pod_ready.go:82] duration metric: took 399.927294ms for pod "kube-scheduler-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:48.445950   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:48.642436   31154 request.go:632] Waited for 196.414559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737-m02
	I1001 19:21:48.642504   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737-m02
	I1001 19:21:48.642511   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:48.642520   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:48.642528   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:48.645886   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:48.841792   31154 request.go:632] Waited for 195.303821ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:48.841877   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:21:48.841893   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:48.841907   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:48.841917   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:48.845141   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:48.845610   31154 pod_ready.go:93] pod "kube-scheduler-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:21:48.845627   31154 pod_ready.go:82] duration metric: took 399.670346ms for pod "kube-scheduler-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:21:48.845638   31154 pod_ready.go:39] duration metric: took 3.199000029s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
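
The per-pod wait above applies the same idea to the system-critical components: each labelled pod is fetched, its Ready condition checked, and the node it is scheduled on confirmed before moving to the next one. A client-go sketch of checking one labelled group of kube-system pods in a single pass (illustrative; the label selectors are the ones printed in the log):

package provision

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// systemPodsReady lists kube-system pods matching one label selector, for
// example "component=etcd" or "k8s-app=kube-dns", and reports whether every
// match has a Ready=True condition. Hypothetical helper, not minikube's code.
func systemPodsReady(ctx context.Context, cs kubernetes.Interface, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, fmt.Errorf("listing pods for %q: %w", selector, err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
				break
			}
		}
		if !ready {
			return false, nil // at least one pod in this group is not Ready yet
		}
	}
	return true, nil
}
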
	I1001 19:21:48.845650   31154 api_server.go:52] waiting for apiserver process to appear ...
	I1001 19:21:48.845706   31154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 19:21:48.860102   31154 api_server.go:72] duration metric: took 22.544907394s to wait for apiserver process to appear ...
	I1001 19:21:48.860136   31154 api_server.go:88] waiting for apiserver healthz status ...
	I1001 19:21:48.860157   31154 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I1001 19:21:48.864372   31154 api_server.go:279] https://192.168.39.14:8443/healthz returned 200:
	ok
	I1001 19:21:48.864454   31154 round_trippers.go:463] GET https://192.168.39.14:8443/version
	I1001 19:21:48.864464   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:48.864471   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:48.864475   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:48.865481   31154 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1001 19:21:48.865563   31154 api_server.go:141] control plane version: v1.31.1
	I1001 19:21:48.865578   31154 api_server.go:131] duration metric: took 5.43668ms to wait for apiserver health ...
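
The health gate in the log is two GETs: /healthz must come back as HTTP 200 with body "ok", and /version confirms the control-plane version (v1.31.1). A hedged sketch of such a probe follows; skipping TLS verification is an illustrative shortcut, whereas the real client authenticates with the profile's client certificate and CA shown earlier in the log:

package provision

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz mirrors the probe above: a GET against https://<host>/healthz
// that is considered healthy only when it returns 200 with body "ok".
func checkHealthz(host string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustrative shortcut only; production callers should verify the
		// cluster CA instead of disabling certificate checks.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(fmt.Sprintf("https://%s/healthz", host))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("apiserver not healthy: %d %q", resp.StatusCode, body)
	}
	return nil
}
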
	I1001 19:21:48.865588   31154 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 19:21:49.042005   31154 request.go:632] Waited for 176.346586ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:21:49.042080   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:21:49.042086   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:49.042096   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:49.042103   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:49.046797   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:49.050697   31154 system_pods.go:59] 17 kube-system pods found
	I1001 19:21:49.050730   31154 system_pods.go:61] "coredns-7c65d6cfc9-hd5hv" [31f0afff-5571-46d6-888f-8982c71ba191] Running
	I1001 19:21:49.050741   31154 system_pods.go:61] "coredns-7c65d6cfc9-v2wsx" [8e3dd318-5017-4ada-bf2f-61b640ee2030] Running
	I1001 19:21:49.050745   31154 system_pods.go:61] "etcd-ha-193737" [99c3674d-50d6-4160-89dd-afb3c2e71039] Running
	I1001 19:21:49.050749   31154 system_pods.go:61] "etcd-ha-193737-m02" [a541b9c8-c10e-45b4-ac9c-d16e8bf659b0] Running
	I1001 19:21:49.050752   31154 system_pods.go:61] "kindnet-drdlr" [13177890-a0eb-47ff-8fe2-585992810e47] Running
	I1001 19:21:49.050755   31154 system_pods.go:61] "kindnet-wnr6g" [89e11419-0c5c-486e-bdbf-eaf6fab1e62c] Running
	I1001 19:21:49.050758   31154 system_pods.go:61] "kube-apiserver-ha-193737" [432bf6a6-4607-4f55-a026-d910075d3145] Running
	I1001 19:21:49.050761   31154 system_pods.go:61] "kube-apiserver-ha-193737-m02" [379a24af-2148-4870-9f4b-25ad48943445] Running
	I1001 19:21:49.050764   31154 system_pods.go:61] "kube-controller-manager-ha-193737" [95fb44db-8cb3-4db9-8a84-ab82e15760de] Running
	I1001 19:21:49.050768   31154 system_pods.go:61] "kube-controller-manager-ha-193737-m02" [5b0821e0-673a-440c-8ca1-fefbfe913b94] Running
	I1001 19:21:49.050771   31154 system_pods.go:61] "kube-proxy-4294m" [f454ca0f-d662-4dfd-ab77-ebccaf3a6b12] Running
	I1001 19:21:49.050773   31154 system_pods.go:61] "kube-proxy-zpsll" [c18fec3c-2880-4860-b220-a44d5e523bed] Running
	I1001 19:21:49.050777   31154 system_pods.go:61] "kube-scheduler-ha-193737" [46ca2b37-6145-4111-9088-bcf51307be3a] Running
	I1001 19:21:49.050780   31154 system_pods.go:61] "kube-scheduler-ha-193737-m02" [72be6cd5-d226-46d6-b675-99014e544dfb] Running
	I1001 19:21:49.050783   31154 system_pods.go:61] "kube-vip-ha-193737" [cbe8e6a4-08f3-4db3-af4d-810a5592597c] Running
	I1001 19:21:49.050790   31154 system_pods.go:61] "kube-vip-ha-193737-m02" [da7dfa59-1b27-45ce-9e61-ad8da40ec548] Running
	I1001 19:21:49.050793   31154 system_pods.go:61] "storage-provisioner" [d5b587a6-418b-47e5-9bf7-3fb6fa5e3372] Running
	I1001 19:21:49.050802   31154 system_pods.go:74] duration metric: took 185.209049ms to wait for pod list to return data ...
	I1001 19:21:49.050812   31154 default_sa.go:34] waiting for default service account to be created ...
	I1001 19:21:49.242249   31154 request.go:632] Waited for 191.355869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/default/serviceaccounts
	I1001 19:21:49.242329   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/default/serviceaccounts
	I1001 19:21:49.242336   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:49.242346   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:49.242365   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:49.246320   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:21:49.246557   31154 default_sa.go:45] found service account: "default"
	I1001 19:21:49.246575   31154 default_sa.go:55] duration metric: took 195.756912ms for default service account to be created ...
	I1001 19:21:49.246582   31154 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 19:21:49.442016   31154 request.go:632] Waited for 195.370336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:21:49.442076   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:21:49.442083   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:49.442092   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:49.442101   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:49.446494   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:49.452730   31154 system_pods.go:86] 17 kube-system pods found
	I1001 19:21:49.452758   31154 system_pods.go:89] "coredns-7c65d6cfc9-hd5hv" [31f0afff-5571-46d6-888f-8982c71ba191] Running
	I1001 19:21:49.452764   31154 system_pods.go:89] "coredns-7c65d6cfc9-v2wsx" [8e3dd318-5017-4ada-bf2f-61b640ee2030] Running
	I1001 19:21:49.452768   31154 system_pods.go:89] "etcd-ha-193737" [99c3674d-50d6-4160-89dd-afb3c2e71039] Running
	I1001 19:21:49.452772   31154 system_pods.go:89] "etcd-ha-193737-m02" [a541b9c8-c10e-45b4-ac9c-d16e8bf659b0] Running
	I1001 19:21:49.452775   31154 system_pods.go:89] "kindnet-drdlr" [13177890-a0eb-47ff-8fe2-585992810e47] Running
	I1001 19:21:49.452778   31154 system_pods.go:89] "kindnet-wnr6g" [89e11419-0c5c-486e-bdbf-eaf6fab1e62c] Running
	I1001 19:21:49.452781   31154 system_pods.go:89] "kube-apiserver-ha-193737" [432bf6a6-4607-4f55-a026-d910075d3145] Running
	I1001 19:21:49.452784   31154 system_pods.go:89] "kube-apiserver-ha-193737-m02" [379a24af-2148-4870-9f4b-25ad48943445] Running
	I1001 19:21:49.452788   31154 system_pods.go:89] "kube-controller-manager-ha-193737" [95fb44db-8cb3-4db9-8a84-ab82e15760de] Running
	I1001 19:21:49.452791   31154 system_pods.go:89] "kube-controller-manager-ha-193737-m02" [5b0821e0-673a-440c-8ca1-fefbfe913b94] Running
	I1001 19:21:49.452793   31154 system_pods.go:89] "kube-proxy-4294m" [f454ca0f-d662-4dfd-ab77-ebccaf3a6b12] Running
	I1001 19:21:49.452803   31154 system_pods.go:89] "kube-proxy-zpsll" [c18fec3c-2880-4860-b220-a44d5e523bed] Running
	I1001 19:21:49.452806   31154 system_pods.go:89] "kube-scheduler-ha-193737" [46ca2b37-6145-4111-9088-bcf51307be3a] Running
	I1001 19:21:49.452809   31154 system_pods.go:89] "kube-scheduler-ha-193737-m02" [72be6cd5-d226-46d6-b675-99014e544dfb] Running
	I1001 19:21:49.452812   31154 system_pods.go:89] "kube-vip-ha-193737" [cbe8e6a4-08f3-4db3-af4d-810a5592597c] Running
	I1001 19:21:49.452815   31154 system_pods.go:89] "kube-vip-ha-193737-m02" [da7dfa59-1b27-45ce-9e61-ad8da40ec548] Running
	I1001 19:21:49.452817   31154 system_pods.go:89] "storage-provisioner" [d5b587a6-418b-47e5-9bf7-3fb6fa5e3372] Running
	I1001 19:21:49.452823   31154 system_pods.go:126] duration metric: took 206.236353ms to wait for k8s-apps to be running ...
	I1001 19:21:49.452833   31154 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 19:21:49.452882   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 19:21:49.467775   31154 system_svc.go:56] duration metric: took 14.93254ms WaitForService to wait for kubelet
	I1001 19:21:49.467809   31154 kubeadm.go:582] duration metric: took 23.152617942s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 19:21:49.467833   31154 node_conditions.go:102] verifying NodePressure condition ...
	I1001 19:21:49.642303   31154 request.go:632] Waited for 174.372716ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes
	I1001 19:21:49.642352   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes
	I1001 19:21:49.642356   31154 round_trippers.go:469] Request Headers:
	I1001 19:21:49.642364   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:21:49.642369   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:21:49.646440   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:21:49.647131   31154 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 19:21:49.647176   31154 node_conditions.go:123] node cpu capacity is 2
	I1001 19:21:49.647192   31154 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 19:21:49.647199   31154 node_conditions.go:123] node cpu capacity is 2
	I1001 19:21:49.647206   31154 node_conditions.go:105] duration metric: took 179.366973ms to run NodePressure ...
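The NodePressure step above lists the cluster nodes and reads the two capacity figures that get logged (ephemeral storage and CPU). A hedged client-go sketch of the same idea; the kubeconfig path and the panic-style error handling are assumptions for the example, not minikube's node_conditions.go:

// Hedged sketch: list nodes and print the capacity figures reported in the
// log. Assumes a reachable cluster via the default kubeconfig location.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}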
	I1001 19:21:49.647235   31154 start.go:241] waiting for startup goroutines ...
	I1001 19:21:49.647267   31154 start.go:255] writing updated cluster config ...
	I1001 19:21:49.649327   31154 out.go:201] 
	I1001 19:21:49.650621   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:21:49.650719   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:21:49.652065   31154 out.go:177] * Starting "ha-193737-m03" control-plane node in "ha-193737" cluster
	I1001 19:21:49.653048   31154 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 19:21:49.653076   31154 cache.go:56] Caching tarball of preloaded images
	I1001 19:21:49.653193   31154 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 19:21:49.653209   31154 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 19:21:49.653361   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:21:49.653640   31154 start.go:360] acquireMachinesLock for ha-193737-m03: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 19:21:49.653690   31154 start.go:364] duration metric: took 31.444µs to acquireMachinesLock for "ha-193737-m03"
	I1001 19:21:49.653709   31154 start.go:93] Provisioning new machine with config: &{Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor
-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:21:49.653808   31154 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1001 19:21:49.655218   31154 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 19:21:49.655330   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:21:49.655375   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:21:49.671457   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36635
	I1001 19:21:49.672015   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:21:49.672579   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:21:49.672608   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:21:49.673005   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:21:49.673189   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetMachineName
	I1001 19:21:49.673372   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:21:49.673585   31154 start.go:159] libmachine.API.Create for "ha-193737" (driver="kvm2")
	I1001 19:21:49.673614   31154 client.go:168] LocalClient.Create starting
	I1001 19:21:49.673650   31154 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem
	I1001 19:21:49.673691   31154 main.go:141] libmachine: Decoding PEM data...
	I1001 19:21:49.673722   31154 main.go:141] libmachine: Parsing certificate...
	I1001 19:21:49.673797   31154 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem
	I1001 19:21:49.673824   31154 main.go:141] libmachine: Decoding PEM data...
	I1001 19:21:49.673838   31154 main.go:141] libmachine: Parsing certificate...
	I1001 19:21:49.673873   31154 main.go:141] libmachine: Running pre-create checks...
	I1001 19:21:49.673885   31154 main.go:141] libmachine: (ha-193737-m03) Calling .PreCreateCheck
	I1001 19:21:49.674030   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetConfigRaw
	I1001 19:21:49.674391   31154 main.go:141] libmachine: Creating machine...
	I1001 19:21:49.674405   31154 main.go:141] libmachine: (ha-193737-m03) Calling .Create
	I1001 19:21:49.674509   31154 main.go:141] libmachine: (ha-193737-m03) Creating KVM machine...
	I1001 19:21:49.675629   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found existing default KVM network
	I1001 19:21:49.675774   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found existing private KVM network mk-ha-193737
	I1001 19:21:49.675890   31154 main.go:141] libmachine: (ha-193737-m03) Setting up store path in /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03 ...
	I1001 19:21:49.675911   31154 main.go:141] libmachine: (ha-193737-m03) Building disk image from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 19:21:49.675957   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:49.675868   32386 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:21:49.676067   31154 main.go:141] libmachine: (ha-193737-m03) Downloading /home/jenkins/minikube-integration/19736-11198/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 19:21:49.919887   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:49.919775   32386 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa...
	I1001 19:21:50.197974   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:50.197797   32386 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/ha-193737-m03.rawdisk...
	I1001 19:21:50.198009   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Writing magic tar header
	I1001 19:21:50.198030   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Writing SSH key tar header
	I1001 19:21:50.198044   31154 main.go:141] libmachine: (ha-193737-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03 (perms=drwx------)
	I1001 19:21:50.198058   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:50.197915   32386 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03 ...
	I1001 19:21:50.198069   31154 main.go:141] libmachine: (ha-193737-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines (perms=drwxr-xr-x)
	I1001 19:21:50.198088   31154 main.go:141] libmachine: (ha-193737-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube (perms=drwxr-xr-x)
	I1001 19:21:50.198099   31154 main.go:141] libmachine: (ha-193737-m03) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198 (perms=drwxrwxr-x)
	I1001 19:21:50.198109   31154 main.go:141] libmachine: (ha-193737-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 19:21:50.198128   31154 main.go:141] libmachine: (ha-193737-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 19:21:50.198141   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03
	I1001 19:21:50.198152   31154 main.go:141] libmachine: (ha-193737-m03) Creating domain...
	I1001 19:21:50.198180   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines
	I1001 19:21:50.198190   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:21:50.198206   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198
	I1001 19:21:50.198215   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 19:21:50.198224   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Checking permissions on dir: /home/jenkins
	I1001 19:21:50.198235   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Checking permissions on dir: /home
	I1001 19:21:50.198248   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Skipping /home - not owner
	I1001 19:21:50.199136   31154 main.go:141] libmachine: (ha-193737-m03) define libvirt domain using xml: 
	I1001 19:21:50.199163   31154 main.go:141] libmachine: (ha-193737-m03) <domain type='kvm'>
	I1001 19:21:50.199174   31154 main.go:141] libmachine: (ha-193737-m03)   <name>ha-193737-m03</name>
	I1001 19:21:50.199182   31154 main.go:141] libmachine: (ha-193737-m03)   <memory unit='MiB'>2200</memory>
	I1001 19:21:50.199192   31154 main.go:141] libmachine: (ha-193737-m03)   <vcpu>2</vcpu>
	I1001 19:21:50.199198   31154 main.go:141] libmachine: (ha-193737-m03)   <features>
	I1001 19:21:50.199207   31154 main.go:141] libmachine: (ha-193737-m03)     <acpi/>
	I1001 19:21:50.199216   31154 main.go:141] libmachine: (ha-193737-m03)     <apic/>
	I1001 19:21:50.199226   31154 main.go:141] libmachine: (ha-193737-m03)     <pae/>
	I1001 19:21:50.199234   31154 main.go:141] libmachine: (ha-193737-m03)     
	I1001 19:21:50.199241   31154 main.go:141] libmachine: (ha-193737-m03)   </features>
	I1001 19:21:50.199248   31154 main.go:141] libmachine: (ha-193737-m03)   <cpu mode='host-passthrough'>
	I1001 19:21:50.199270   31154 main.go:141] libmachine: (ha-193737-m03)   
	I1001 19:21:50.199286   31154 main.go:141] libmachine: (ha-193737-m03)   </cpu>
	I1001 19:21:50.199295   31154 main.go:141] libmachine: (ha-193737-m03)   <os>
	I1001 19:21:50.199303   31154 main.go:141] libmachine: (ha-193737-m03)     <type>hvm</type>
	I1001 19:21:50.199315   31154 main.go:141] libmachine: (ha-193737-m03)     <boot dev='cdrom'/>
	I1001 19:21:50.199323   31154 main.go:141] libmachine: (ha-193737-m03)     <boot dev='hd'/>
	I1001 19:21:50.199334   31154 main.go:141] libmachine: (ha-193737-m03)     <bootmenu enable='no'/>
	I1001 19:21:50.199343   31154 main.go:141] libmachine: (ha-193737-m03)   </os>
	I1001 19:21:50.199352   31154 main.go:141] libmachine: (ha-193737-m03)   <devices>
	I1001 19:21:50.199367   31154 main.go:141] libmachine: (ha-193737-m03)     <disk type='file' device='cdrom'>
	I1001 19:21:50.199383   31154 main.go:141] libmachine: (ha-193737-m03)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/boot2docker.iso'/>
	I1001 19:21:50.199394   31154 main.go:141] libmachine: (ha-193737-m03)       <target dev='hdc' bus='scsi'/>
	I1001 19:21:50.199404   31154 main.go:141] libmachine: (ha-193737-m03)       <readonly/>
	I1001 19:21:50.199413   31154 main.go:141] libmachine: (ha-193737-m03)     </disk>
	I1001 19:21:50.199425   31154 main.go:141] libmachine: (ha-193737-m03)     <disk type='file' device='disk'>
	I1001 19:21:50.199441   31154 main.go:141] libmachine: (ha-193737-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 19:21:50.199458   31154 main.go:141] libmachine: (ha-193737-m03)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/ha-193737-m03.rawdisk'/>
	I1001 19:21:50.199468   31154 main.go:141] libmachine: (ha-193737-m03)       <target dev='hda' bus='virtio'/>
	I1001 19:21:50.199477   31154 main.go:141] libmachine: (ha-193737-m03)     </disk>
	I1001 19:21:50.199486   31154 main.go:141] libmachine: (ha-193737-m03)     <interface type='network'>
	I1001 19:21:50.199495   31154 main.go:141] libmachine: (ha-193737-m03)       <source network='mk-ha-193737'/>
	I1001 19:21:50.199503   31154 main.go:141] libmachine: (ha-193737-m03)       <model type='virtio'/>
	I1001 19:21:50.199531   31154 main.go:141] libmachine: (ha-193737-m03)     </interface>
	I1001 19:21:50.199562   31154 main.go:141] libmachine: (ha-193737-m03)     <interface type='network'>
	I1001 19:21:50.199576   31154 main.go:141] libmachine: (ha-193737-m03)       <source network='default'/>
	I1001 19:21:50.199588   31154 main.go:141] libmachine: (ha-193737-m03)       <model type='virtio'/>
	I1001 19:21:50.199599   31154 main.go:141] libmachine: (ha-193737-m03)     </interface>
	I1001 19:21:50.199608   31154 main.go:141] libmachine: (ha-193737-m03)     <serial type='pty'>
	I1001 19:21:50.199619   31154 main.go:141] libmachine: (ha-193737-m03)       <target port='0'/>
	I1001 19:21:50.199627   31154 main.go:141] libmachine: (ha-193737-m03)     </serial>
	I1001 19:21:50.199662   31154 main.go:141] libmachine: (ha-193737-m03)     <console type='pty'>
	I1001 19:21:50.199708   31154 main.go:141] libmachine: (ha-193737-m03)       <target type='serial' port='0'/>
	I1001 19:21:50.199726   31154 main.go:141] libmachine: (ha-193737-m03)     </console>
	I1001 19:21:50.199748   31154 main.go:141] libmachine: (ha-193737-m03)     <rng model='virtio'>
	I1001 19:21:50.199767   31154 main.go:141] libmachine: (ha-193737-m03)       <backend model='random'>/dev/random</backend>
	I1001 19:21:50.199780   31154 main.go:141] libmachine: (ha-193737-m03)     </rng>
	I1001 19:21:50.199794   31154 main.go:141] libmachine: (ha-193737-m03)     
	I1001 19:21:50.199803   31154 main.go:141] libmachine: (ha-193737-m03)     
	I1001 19:21:50.199814   31154 main.go:141] libmachine: (ha-193737-m03)   </devices>
	I1001 19:21:50.199820   31154 main.go:141] libmachine: (ha-193737-m03) </domain>
	I1001 19:21:50.199837   31154 main.go:141] libmachine: (ha-193737-m03) 
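The <domain> XML above is then handed to libvirt to define and boot the VM. A hedged sketch of that step with the Go libvirt bindings; the module path (libvirt.org/go/libvirt), the qemu:///system URI and the placeholder XML are assumptions, since minikube actually drives this through the external docker-machine-driver-kvm2 plugin:

// Hedged sketch: register a KVM domain from its XML description and boot it.
package main

import (
	"fmt"

	"libvirt.org/go/libvirt"
)

// defineAndStart defines a persistent domain from domainXML and starts it.
func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return fmt.Errorf("connecting to libvirt: %w", err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return fmt.Errorf("defining domain: %w", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // virDomainCreate: start the defined domain
		return fmt.Errorf("starting domain: %w", err)
	}
	return nil
}

func main() {
	// The caller would pass the full <domain> XML string built above.
	if err := defineAndStart("<domain type='kvm'>…</domain>"); err != nil {
		fmt.Println(err)
	}
}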
	I1001 19:21:50.206580   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:8b:a8:e7 in network default
	I1001 19:21:50.207376   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:50.207405   31154 main.go:141] libmachine: (ha-193737-m03) Ensuring networks are active...
	I1001 19:21:50.208168   31154 main.go:141] libmachine: (ha-193737-m03) Ensuring network default is active
	I1001 19:21:50.208498   31154 main.go:141] libmachine: (ha-193737-m03) Ensuring network mk-ha-193737 is active
	I1001 19:21:50.208873   31154 main.go:141] libmachine: (ha-193737-m03) Getting domain xml...
	I1001 19:21:50.209740   31154 main.go:141] libmachine: (ha-193737-m03) Creating domain...
	I1001 19:21:51.487699   31154 main.go:141] libmachine: (ha-193737-m03) Waiting to get IP...
	I1001 19:21:51.488558   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:51.488971   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:51.488988   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:51.488956   32386 retry.go:31] will retry after 292.057466ms: waiting for machine to come up
	I1001 19:21:51.782677   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:51.783145   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:51.783197   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:51.783106   32386 retry.go:31] will retry after 354.701551ms: waiting for machine to come up
	I1001 19:21:52.139803   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:52.140295   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:52.140322   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:52.140239   32386 retry.go:31] will retry after 363.996754ms: waiting for machine to come up
	I1001 19:21:52.505881   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:52.506427   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:52.506447   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:52.506386   32386 retry.go:31] will retry after 414.43192ms: waiting for machine to come up
	I1001 19:21:52.922204   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:52.922737   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:52.922766   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:52.922724   32386 retry.go:31] will retry after 579.407554ms: waiting for machine to come up
	I1001 19:21:53.503613   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:53.504058   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:53.504085   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:53.504000   32386 retry.go:31] will retry after 721.311664ms: waiting for machine to come up
	I1001 19:21:54.227110   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:54.227610   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:54.227655   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:54.227567   32386 retry.go:31] will retry after 1.130708111s: waiting for machine to come up
	I1001 19:21:55.360491   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:55.360900   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:55.360926   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:55.360870   32386 retry.go:31] will retry after 1.468803938s: waiting for machine to come up
	I1001 19:21:56.831225   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:56.831722   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:56.831750   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:56.831677   32386 retry.go:31] will retry after 1.742550848s: waiting for machine to come up
	I1001 19:21:58.576460   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:21:58.576859   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:21:58.576883   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:21:58.576823   32386 retry.go:31] will retry after 1.623668695s: waiting for machine to come up
	I1001 19:22:00.201759   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:00.202340   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:22:00.202361   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:22:00.202290   32386 retry.go:31] will retry after 1.997667198s: waiting for machine to come up
	I1001 19:22:02.201433   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:02.201901   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:22:02.201917   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:22:02.201868   32386 retry.go:31] will retry after 2.886327611s: waiting for machine to come up
	I1001 19:22:05.090402   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:05.090907   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:22:05.090933   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:22:05.090844   32386 retry.go:31] will retry after 3.87427099s: waiting for machine to come up
	I1001 19:22:08.966290   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:08.966719   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find current IP address of domain ha-193737-m03 in network mk-ha-193737
	I1001 19:22:08.966754   31154 main.go:141] libmachine: (ha-193737-m03) DBG | I1001 19:22:08.966674   32386 retry.go:31] will retry after 4.039315752s: waiting for machine to come up
	I1001 19:22:13.009358   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.009842   31154 main.go:141] libmachine: (ha-193737-m03) Found IP for machine: 192.168.39.101
	I1001 19:22:13.009868   31154 main.go:141] libmachine: (ha-193737-m03) Reserving static IP address...
	I1001 19:22:13.009881   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has current primary IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.010863   31154 main.go:141] libmachine: (ha-193737-m03) DBG | unable to find host DHCP lease matching {name: "ha-193737-m03", mac: "52:54:00:9e:b9:5c", ip: "192.168.39.101"} in network mk-ha-193737
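The repeated "will retry after …" lines above are a wait loop whose delay grows after each failed lookup until the new domain shows up in the DHCP leases. A minimal sketch of that pattern; the growth factor and the attempt cap are assumptions, not minikube's retry.go:

// Hedged sketch: poll a condition, sleeping a little longer after each
// failed attempt, as the "will retry after ..." log lines suggest.
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitFor(check func() bool, initial time.Duration, attempts int) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		if check() {
			return nil
		}
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the delay between attempts (assumed factor)
	}
	return errors.New("gave up waiting for an IP address")
}

func main() {
	start := time.Now()
	// Stand-in condition: pretend the DHCP lease appears after ~2 seconds.
	err := waitFor(func() bool { return time.Since(start) > 2*time.Second },
		300*time.Millisecond, 10)
	fmt.Println("result:", err)
}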
	I1001 19:22:13.088968   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Getting to WaitForSSH function...
	I1001 19:22:13.088993   31154 main.go:141] libmachine: (ha-193737-m03) Reserved static IP address: 192.168.39.101
	I1001 19:22:13.089006   31154 main.go:141] libmachine: (ha-193737-m03) Waiting for SSH to be available...
	I1001 19:22:13.091870   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.092415   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.092449   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.092644   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Using SSH client type: external
	I1001 19:22:13.092667   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa (-rw-------)
	I1001 19:22:13.092694   31154 main.go:141] libmachine: (ha-193737-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.101 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 19:22:13.092712   31154 main.go:141] libmachine: (ha-193737-m03) DBG | About to run SSH command:
	I1001 19:22:13.092731   31154 main.go:141] libmachine: (ha-193737-m03) DBG | exit 0
	I1001 19:22:13.220534   31154 main.go:141] libmachine: (ha-193737-m03) DBG | SSH cmd err, output: <nil>: 
	I1001 19:22:13.220779   31154 main.go:141] libmachine: (ha-193737-m03) KVM machine creation complete!
	I1001 19:22:13.221074   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetConfigRaw
	I1001 19:22:13.221579   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:22:13.221804   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:22:13.221984   31154 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 19:22:13.222002   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetState
	I1001 19:22:13.223279   31154 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 19:22:13.223293   31154 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 19:22:13.223299   31154 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 19:22:13.223305   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:13.225923   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.226398   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.226416   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.226678   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:13.226887   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.227052   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.227186   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:13.227368   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:22:13.227559   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1001 19:22:13.227571   31154 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 19:22:13.332328   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
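WaitForSSH above simply runs `exit 0` over SSH until it succeeds. A hedged sketch of running a single remote command with golang.org/x/crypto/ssh; skipping host-key verification and the hard-coded address/key path mirror the throwaway test VM logged above and are not production advice:

// Hedged sketch: run one command over SSH with key auth, roughly what the
// "About to run SSH command: exit 0" probe amounts to.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runSSH(addr, user, keyPath, cmd string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-VM convenience only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run(cmd)
}

func main() {
	err := runSSH("192.168.39.101:22", "docker",
		"/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa",
		"exit 0")
	fmt.Println("ssh probe:", err)
}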
	I1001 19:22:13.332352   31154 main.go:141] libmachine: Detecting the provisioner...
	I1001 19:22:13.332384   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:13.335169   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.335569   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.335603   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.335764   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:13.336042   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.336239   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.336386   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:13.336591   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:22:13.336771   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1001 19:22:13.336783   31154 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 19:22:13.445518   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 19:22:13.445586   31154 main.go:141] libmachine: found compatible host: buildroot
	I1001 19:22:13.445594   31154 main.go:141] libmachine: Provisioning with buildroot...
	I1001 19:22:13.445601   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetMachineName
	I1001 19:22:13.445821   31154 buildroot.go:166] provisioning hostname "ha-193737-m03"
	I1001 19:22:13.445847   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetMachineName
	I1001 19:22:13.446042   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:13.449433   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.449860   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.449897   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.450180   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:13.450368   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.450566   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.450713   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:13.450881   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:22:13.451039   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1001 19:22:13.451051   31154 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-193737-m03 && echo "ha-193737-m03" | sudo tee /etc/hostname
	I1001 19:22:13.572777   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-193737-m03
	
	I1001 19:22:13.572810   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:13.575494   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.575835   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.575859   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.576047   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:13.576235   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.576419   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.576571   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:13.576759   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:22:13.576956   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1001 19:22:13.576973   31154 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-193737-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-193737-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-193737-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 19:22:13.689983   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 19:22:13.690015   31154 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 19:22:13.690038   31154 buildroot.go:174] setting up certificates
	I1001 19:22:13.690050   31154 provision.go:84] configureAuth start
	I1001 19:22:13.690066   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetMachineName
	I1001 19:22:13.690369   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetIP
	I1001 19:22:13.693242   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.693664   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.693693   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.693840   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:13.696141   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.696495   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.696524   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.696638   31154 provision.go:143] copyHostCerts
	I1001 19:22:13.696676   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:22:13.696720   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 19:22:13.696731   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:22:13.696821   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 19:22:13.696919   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:22:13.696949   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 19:22:13.696960   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:22:13.697003   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 19:22:13.697067   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:22:13.697091   31154 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 19:22:13.697100   31154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:22:13.697136   31154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 19:22:13.697206   31154 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.ha-193737-m03 san=[127.0.0.1 192.168.39.101 ha-193737-m03 localhost minikube]
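The server certificate above is issued with the SANs [127.0.0.1 192.168.39.101 ha-193737-m03 localhost minikube] and signed by the profile's CA. A hedged crypto/x509 sketch of issuing such a certificate; generating the CA in memory and the 2048-bit keys are assumptions for brevity, whereas the real provisioner loads ca.pem/ca-key.pem from the .minikube/certs directory:

// Hedged sketch: issue a server certificate carrying IP and DNS SANs,
// signed by a (stand-in) CA, and print it in PEM form.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Stand-in CA generated in memory; the real provisioner loads ca.pem/ca-key.pem.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	must(err)
	caCert, err := x509.ParseCertificate(caDER)
	must(err)

	// Server certificate carrying the IP and DNS SANs from the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-193737-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.101")},
		DNSNames:     []string{"ha-193737-m03", "localhost", "minikube"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	must(err)
	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}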
	I1001 19:22:13.877573   31154 provision.go:177] copyRemoteCerts
	I1001 19:22:13.877625   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 19:22:13.877649   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:13.880678   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.880932   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:13.880970   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:13.881176   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:13.881406   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:13.881587   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:13.881804   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa Username:docker}
	I1001 19:22:13.962987   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 19:22:13.963068   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 19:22:13.986966   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 19:22:13.987070   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 19:22:14.013722   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 19:22:14.013794   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 19:22:14.037854   31154 provision.go:87] duration metric: took 347.788312ms to configureAuth
	I1001 19:22:14.037883   31154 buildroot.go:189] setting minikube options for container-runtime
	I1001 19:22:14.038135   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:22:14.038209   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:14.040944   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.041372   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.041401   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.041587   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:14.041771   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:14.041906   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:14.042003   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:14.042139   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:22:14.042328   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1001 19:22:14.042345   31154 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 19:22:14.262634   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 19:22:14.262673   31154 main.go:141] libmachine: Checking connection to Docker...
	I1001 19:22:14.262687   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetURL
	I1001 19:22:14.263998   31154 main.go:141] libmachine: (ha-193737-m03) DBG | Using libvirt version 6000000
	I1001 19:22:14.266567   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.266926   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.266955   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.267154   31154 main.go:141] libmachine: Docker is up and running!
	I1001 19:22:14.267166   31154 main.go:141] libmachine: Reticulating splines...
	I1001 19:22:14.267173   31154 client.go:171] duration metric: took 24.593551771s to LocalClient.Create
	I1001 19:22:14.267196   31154 start.go:167] duration metric: took 24.593612564s to libmachine.API.Create "ha-193737"
	I1001 19:22:14.267205   31154 start.go:293] postStartSetup for "ha-193737-m03" (driver="kvm2")
	I1001 19:22:14.267214   31154 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 19:22:14.267240   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:22:14.267459   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 19:22:14.267484   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:14.269571   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.269977   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.270004   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.270121   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:14.270292   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:14.270427   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:14.270551   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa Username:docker}
	I1001 19:22:14.350988   31154 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 19:22:14.355823   31154 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 19:22:14.355848   31154 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 19:22:14.355915   31154 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 19:22:14.355986   31154 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 19:22:14.355994   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /etc/ssl/certs/184302.pem
	I1001 19:22:14.356070   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 19:22:14.366040   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:22:14.390055   31154 start.go:296] duration metric: took 122.835456ms for postStartSetup
	I1001 19:22:14.390108   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetConfigRaw
	I1001 19:22:14.390696   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetIP
	I1001 19:22:14.394065   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.394508   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.394536   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.394910   31154 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:22:14.395150   31154 start.go:128] duration metric: took 24.741329773s to createHost
	I1001 19:22:14.395182   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:14.397581   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.397994   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.398017   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.398188   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:14.398403   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:14.398574   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:14.398727   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:14.398880   31154 main.go:141] libmachine: Using SSH client type: native
	I1001 19:22:14.399094   31154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1001 19:22:14.399111   31154 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 19:22:14.505599   31154 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727810534.482085733
	
	I1001 19:22:14.505628   31154 fix.go:216] guest clock: 1727810534.482085733
	I1001 19:22:14.505639   31154 fix.go:229] Guest: 2024-10-01 19:22:14.482085733 +0000 UTC Remote: 2024-10-01 19:22:14.395166889 +0000 UTC m=+146.623005707 (delta=86.918844ms)
	I1001 19:22:14.505658   31154 fix.go:200] guest clock delta is within tolerance: 86.918844ms
	I1001 19:22:14.505664   31154 start.go:83] releasing machines lock for "ha-193737-m03", held for 24.851963464s
	I1001 19:22:14.505684   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:22:14.505908   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetIP
	I1001 19:22:14.508696   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.509064   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.509086   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.511117   31154 out.go:177] * Found network options:
	I1001 19:22:14.512450   31154 out.go:177]   - NO_PROXY=192.168.39.14,192.168.39.27
	W1001 19:22:14.513603   31154 proxy.go:119] fail to check proxy env: Error ip not in block
	W1001 19:22:14.513632   31154 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 19:22:14.513653   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:22:14.514254   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:22:14.514460   31154 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:22:14.514553   31154 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 19:22:14.514592   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	W1001 19:22:14.514627   31154 proxy.go:119] fail to check proxy env: Error ip not in block
	W1001 19:22:14.514652   31154 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 19:22:14.514726   31154 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 19:22:14.514748   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:22:14.517511   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.517716   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.517872   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.517897   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.518069   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:14.518071   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:14.518151   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:14.518298   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:22:14.518302   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:14.518474   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:14.518512   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:22:14.518613   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:22:14.518617   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa Username:docker}
	I1001 19:22:14.518740   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa Username:docker}
	I1001 19:22:14.749140   31154 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 19:22:14.755011   31154 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 19:22:14.755083   31154 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 19:22:14.772351   31154 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 19:22:14.772388   31154 start.go:495] detecting cgroup driver to use...
	I1001 19:22:14.772457   31154 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 19:22:14.789303   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 19:22:14.804840   31154 docker.go:217] disabling cri-docker service (if available) ...
	I1001 19:22:14.804906   31154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 19:22:14.819518   31154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 19:22:14.834095   31154 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 19:22:14.944783   31154 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 19:22:15.079717   31154 docker.go:233] disabling docker service ...
	I1001 19:22:15.079790   31154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 19:22:15.095162   31154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 19:22:15.107998   31154 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 19:22:15.243729   31154 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 19:22:15.377225   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 19:22:15.391343   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 19:22:15.411068   31154 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 19:22:15.411143   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:22:15.423227   31154 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 19:22:15.423294   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:22:15.434691   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:22:15.446242   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:22:15.457352   31154 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 19:22:15.469147   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:22:15.479924   31154 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:22:15.497221   31154 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:22:15.507678   31154 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 19:22:15.517482   31154 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 19:22:15.517554   31154 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 19:22:15.532214   31154 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 19:22:15.541788   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:22:15.665094   31154 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 19:22:15.757492   31154 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 19:22:15.757569   31154 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 19:22:15.762004   31154 start.go:563] Will wait 60s for crictl version
	I1001 19:22:15.762063   31154 ssh_runner.go:195] Run: which crictl
	I1001 19:22:15.766039   31154 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 19:22:15.802516   31154 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 19:22:15.802600   31154 ssh_runner.go:195] Run: crio --version
	I1001 19:22:15.831926   31154 ssh_runner.go:195] Run: crio --version
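The block above prepares the container runtime on the new node: crictl is pointed at the CRI-O socket, the pause image and cgroup driver are pinned in the 02-crio.conf drop-in, stale podman/docker bridge CNI configs are disabled, br_netfilter and IP forwarding are enabled, and CRI-O is restarted and probed. A minimal manual sketch of the same steps, assuming the same drop-in path as in the log:

# point crictl at the CRI-O socket (what the log writes to /etc/crictl.yaml)
printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
# pin the pause image and cgroup driver in the drop-in used above
sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
# kernel prerequisites, then restart and verify
sudo modprobe br_netfilter
sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
sudo systemctl daemon-reload && sudo systemctl restart crio
sudo crictl version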
	I1001 19:22:15.862187   31154 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 19:22:15.863552   31154 out.go:177]   - env NO_PROXY=192.168.39.14
	I1001 19:22:15.864903   31154 out.go:177]   - env NO_PROXY=192.168.39.14,192.168.39.27
	I1001 19:22:15.866357   31154 main.go:141] libmachine: (ha-193737-m03) Calling .GetIP
	I1001 19:22:15.868791   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:15.869113   31154 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:22:15.869142   31154 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:22:15.869293   31154 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 19:22:15.873237   31154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 19:22:15.885293   31154 mustload.go:65] Loading cluster: ha-193737
	I1001 19:22:15.885514   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:22:15.885795   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:22:15.885838   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:22:15.901055   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45767
	I1001 19:22:15.901633   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:22:15.902627   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:22:15.902658   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:22:15.903034   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:22:15.903198   31154 main.go:141] libmachine: (ha-193737) Calling .GetState
	I1001 19:22:15.905017   31154 host.go:66] Checking if "ha-193737" exists ...
	I1001 19:22:15.905429   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:22:15.905488   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:22:15.921741   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33127
	I1001 19:22:15.922203   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:22:15.923200   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:22:15.923220   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:22:15.923541   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:22:15.923744   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:22:15.923907   31154 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737 for IP: 192.168.39.101
	I1001 19:22:15.923919   31154 certs.go:194] generating shared ca certs ...
	I1001 19:22:15.923941   31154 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:22:15.924081   31154 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 19:22:15.924118   31154 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 19:22:15.924126   31154 certs.go:256] generating profile certs ...
	I1001 19:22:15.924217   31154 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key
	I1001 19:22:15.924242   31154 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.09da423f
	I1001 19:22:15.924256   31154 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.09da423f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.14 192.168.39.27 192.168.39.101 192.168.39.254]
	I1001 19:22:16.102464   31154 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.09da423f ...
	I1001 19:22:16.102493   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.09da423f: {Name:mk41b913f57e7f10c713b2e18136c742f7b09ec4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:22:16.102655   31154 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.09da423f ...
	I1001 19:22:16.102668   31154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.09da423f: {Name:mkaf44cea34e6bfbac4ea8c8d70ebec43d2a6d80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:22:16.102739   31154 certs.go:381] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.09da423f -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt
	I1001 19:22:16.102870   31154 certs.go:385] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.09da423f -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key
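The apiserver certificate is regenerated here so its SAN list covers the new control-plane IP (192.168.39.101) alongside the existing members and the VIP 192.168.39.254. As an illustrative check only, the SANs of the copy placed on the node could be inspected with openssl:

sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
  | grep -A1 'Subject Alternative Name'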
	I1001 19:22:16.102988   31154 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key
	I1001 19:22:16.103003   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 19:22:16.103016   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 19:22:16.103030   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 19:22:16.103042   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 19:22:16.103054   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 19:22:16.103067   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 19:22:16.103081   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 19:22:16.120441   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 19:22:16.120535   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem (1338 bytes)
	W1001 19:22:16.120569   31154 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430_empty.pem, impossibly tiny 0 bytes
	I1001 19:22:16.120579   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 19:22:16.120602   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 19:22:16.120624   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 19:22:16.120682   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 19:22:16.120730   31154 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:22:16.120759   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /usr/share/ca-certificates/184302.pem
	I1001 19:22:16.120772   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:22:16.120784   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem -> /usr/share/ca-certificates/18430.pem
	I1001 19:22:16.120814   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:22:16.123512   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:22:16.123983   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:22:16.124012   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:22:16.124198   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:22:16.124425   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:22:16.124611   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:22:16.124747   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:22:16.196684   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1001 19:22:16.201293   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1001 19:22:16.211163   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1001 19:22:16.215061   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1001 19:22:16.225018   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1001 19:22:16.228909   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1001 19:22:16.239430   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1001 19:22:16.243222   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1001 19:22:16.253163   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1001 19:22:16.256929   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1001 19:22:16.266378   31154 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1001 19:22:16.270062   31154 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1001 19:22:16.278964   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 19:22:16.303288   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 19:22:16.326243   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 19:22:16.347460   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 19:22:16.372037   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1001 19:22:16.396287   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 19:22:16.420724   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 19:22:16.445707   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 19:22:16.468539   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /usr/share/ca-certificates/184302.pem (1708 bytes)
	I1001 19:22:16.492971   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 19:22:16.517838   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem --> /usr/share/ca-certificates/18430.pem (1338 bytes)
	I1001 19:22:16.541960   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1001 19:22:16.557831   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1001 19:22:16.573594   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1001 19:22:16.590168   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1001 19:22:16.607168   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1001 19:22:16.623957   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1001 19:22:16.640438   31154 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1001 19:22:16.655967   31154 ssh_runner.go:195] Run: openssl version
	I1001 19:22:16.661524   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/184302.pem && ln -fs /usr/share/ca-certificates/184302.pem /etc/ssl/certs/184302.pem"
	I1001 19:22:16.672376   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/184302.pem
	I1001 19:22:16.676864   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 19:22:16.676922   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/184302.pem
	I1001 19:22:16.682647   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/184302.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 19:22:16.693083   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 19:22:16.703938   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:22:16.708263   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:22:16.708320   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:22:16.714520   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 19:22:16.725249   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18430.pem && ln -fs /usr/share/ca-certificates/18430.pem /etc/ssl/certs/18430.pem"
	I1001 19:22:16.736315   31154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18430.pem
	I1001 19:22:16.741061   31154 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 19:22:16.741120   31154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18430.pem
	I1001 19:22:16.746697   31154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18430.pem /etc/ssl/certs/51391683.0"
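Each "ln -fs ... /etc/ssl/certs/<hash>.0" above creates the subject-hash symlink that OpenSSL uses to locate a CA during verification; the hash (b5213941 for minikubeCA.pem in this run) is exactly what "openssl x509 -hash" prints. A sketch of the same pattern for one certificate:

sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints b5213941 for this CA
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"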
	I1001 19:22:16.757551   31154 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 19:22:16.761481   31154 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 19:22:16.761539   31154 kubeadm.go:934] updating node {m03 192.168.39.101 8443 v1.31.1 crio true true} ...
	I1001 19:22:16.761636   31154 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-193737-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 19:22:16.761666   31154 kube-vip.go:115] generating kube-vip config ...
	I1001 19:22:16.761704   31154 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1001 19:22:16.778682   31154 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1001 19:22:16.778755   31154 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
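The generated manifest runs kube-vip as a static pod on each control-plane node, with leader election (lease plndr-cp-lock) deciding which member currently answers on the VIP 192.168.39.254:8443. Once the node is up, a hedged way to check where the VIP landed and that the pod's container is running:

# on a control-plane node: is the VIP currently bound to eth0 here?
ip -4 addr show dev eth0 | grep 192.168.39.254
# is the kube-vip static pod container up?
sudo crictl ps --name kube-vip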
	I1001 19:22:16.778825   31154 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 19:22:16.788174   31154 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1001 19:22:16.788258   31154 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1001 19:22:16.797330   31154 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1001 19:22:16.797360   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 19:22:16.797405   31154 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1001 19:22:16.797420   31154 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 19:22:16.797425   31154 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1001 19:22:16.797452   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 19:22:16.797455   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 19:22:16.797515   31154 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 19:22:16.806983   31154 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1001 19:22:16.807016   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1001 19:22:16.807033   31154 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1001 19:22:16.807064   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1001 19:22:16.822346   31154 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 19:22:16.822450   31154 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 19:22:16.908222   31154 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1001 19:22:16.908266   31154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
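The kubelet, kubeadm, and kubectl binaries are copied from the host cache rather than downloaded on the node; the checksum URLs in the log show the equivalent manual fetch. A sketch of downloading and verifying one binary by hand, using the same URLs:

curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet
curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check
sudo install -m 0755 kubelet /var/lib/minikube/binaries/v1.31.1/kubelet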
	I1001 19:22:17.718151   31154 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1001 19:22:17.728679   31154 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1001 19:22:17.753493   31154 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 19:22:17.773315   31154 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1001 19:22:17.791404   31154 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1001 19:22:17.795599   31154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 19:22:17.808083   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:22:17.928195   31154 ssh_runner.go:195] Run: sudo systemctl start kubelet
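With the systemd drop-in, kubelet.service, and kube-vip manifest in place, kubelet is started on the new node. If this step misbehaves in a failed run, the usual node-side checks would be along these lines (illustrative):

sudo systemctl is-active kubelet
sudo journalctl -u kubelet --no-pager -n 50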
	I1001 19:22:17.944678   31154 host.go:66] Checking if "ha-193737" exists ...
	I1001 19:22:17.945052   31154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:22:17.945093   31154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:22:17.962020   31154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45629
	I1001 19:22:17.962474   31154 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:22:17.962912   31154 main.go:141] libmachine: Using API Version  1
	I1001 19:22:17.962940   31154 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:22:17.963311   31154 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:22:17.963520   31154 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:22:17.963697   31154 start.go:317] joinCluster: &{Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:22:17.963861   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1001 19:22:17.963886   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:22:17.967232   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:22:17.967827   31154 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:22:17.967856   31154 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:22:17.968135   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:22:17.968336   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:22:17.968495   31154 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:22:17.968659   31154 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:22:18.133596   31154 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:22:18.133651   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token z7cdmg.hjk7kyt30ndw2tea --discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-193737-m03 --control-plane --apiserver-advertise-address=192.168.39.101 --apiserver-bind-port=8443"
	I1001 19:22:41.859086   31154 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token z7cdmg.hjk7kyt30ndw2tea --discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-193737-m03 --control-plane --apiserver-advertise-address=192.168.39.101 --apiserver-bind-port=8443": (23.725407283s)
	I1001 19:22:41.859128   31154 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1001 19:22:42.384071   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-193737-m03 minikube.k8s.io/updated_at=2024_10_01T19_22_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=ha-193737 minikube.k8s.io/primary=false
	I1001 19:22:42.510669   31154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-193737-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1001 19:22:42.641492   31154 start.go:319] duration metric: took 24.67779185s to joinCluster
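The join above follows the standard pattern for adding a control-plane member: a join command is minted on an existing control-plane node, then control-plane flags are appended for the new member before it is labeled and its NoSchedule taint removed. A hedged recreation with placeholders in place of the one-time token and CA hash:

# on an existing control-plane node: mint a join command
sudo kubeadm token create --print-join-command --ttl=0
# on the new node: run it with the control-plane flags the log adds
sudo kubeadm join control-plane.minikube.internal:8443 \
  --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --apiserver-advertise-address=192.168.39.101 --apiserver-bind-port=8443 \
  --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-193737-m03 \
  --ignore-preflight-errors=all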
	I1001 19:22:42.641581   31154 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 19:22:42.641937   31154 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:22:42.642770   31154 out.go:177] * Verifying Kubernetes components...
	I1001 19:22:42.643798   31154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:22:42.883720   31154 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 19:22:42.899372   31154 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 19:22:42.899626   31154 kapi.go:59] client config for ha-193737: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.crt", KeyFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key", CAFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1001 19:22:42.899683   31154 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.14:8443
	I1001 19:22:42.899959   31154 node_ready.go:35] waiting up to 6m0s for node "ha-193737-m03" to be "Ready" ...
	I1001 19:22:42.900040   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:42.900052   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:42.900063   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:42.900071   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:42.904647   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
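This readiness loop polls GET /api/v1/nodes/ha-193737-m03 roughly every 500ms until the node reports Ready (or the 6m budget expires). The kubectl equivalent, assuming the kubeconfig context is named after the profile, would be roughly:

kubectl --context ha-193737 wait --for=condition=Ready node/ha-193737-m03 --timeout=6m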
	I1001 19:22:43.401126   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:43.401152   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:43.401163   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:43.401168   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:43.405027   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:43.900824   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:43.900848   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:43.900859   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:43.900868   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:43.904531   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:44.400251   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:44.400272   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:44.400281   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:44.400285   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:44.403517   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:44.901001   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:44.901028   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:44.901036   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:44.901041   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:44.905012   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:44.905575   31154 node_ready.go:53] node "ha-193737-m03" has status "Ready":"False"
	I1001 19:22:45.400898   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:45.400924   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:45.400935   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:45.400942   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:45.405202   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:22:45.900749   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:45.900772   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:45.900781   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:45.900785   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:45.904505   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:46.400832   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:46.400855   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:46.400865   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:46.400871   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:46.404455   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:46.900834   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:46.900926   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:46.900945   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:46.900955   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:46.907848   31154 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1001 19:22:46.909060   31154 node_ready.go:53] node "ha-193737-m03" has status "Ready":"False"
	I1001 19:22:47.400619   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:47.400639   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:47.400647   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:47.400651   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:47.404519   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:47.900808   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:47.900835   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:47.900846   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:47.900851   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:48.028121   31154 round_trippers.go:574] Response Status: 200 OK in 127 milliseconds
	I1001 19:22:48.400839   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:48.400859   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:48.400866   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:48.400870   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:48.404198   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:48.900508   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:48.900533   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:48.900544   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:48.900551   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:48.904379   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:49.400836   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:49.400857   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:49.400866   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:49.400870   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:49.403736   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:22:49.404256   31154 node_ready.go:53] node "ha-193737-m03" has status "Ready":"False"
	I1001 19:22:49.901034   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:49.901058   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:49.901068   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:49.901073   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:49.905378   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:22:50.400178   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:50.400198   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:50.400206   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:50.400214   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:50.403269   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:50.901215   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:50.901242   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:50.901251   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:50.901256   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:50.905409   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:22:51.400867   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:51.400890   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:51.400899   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:51.400908   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:51.404516   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:51.404962   31154 node_ready.go:53] node "ha-193737-m03" has status "Ready":"False"
	I1001 19:22:51.900265   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:51.900308   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:51.900315   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:51.900319   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:51.903634   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:52.401178   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:52.401200   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:52.401206   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:52.401211   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:52.404511   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:52.900412   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:52.900432   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:52.900441   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:52.900446   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:52.903570   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:53.400572   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:53.400602   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:53.400614   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:53.400622   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:53.403821   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:53.900178   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:53.900201   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:53.900210   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:53.900214   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:53.903933   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:53.904621   31154 node_ready.go:53] node "ha-193737-m03" has status "Ready":"False"
	I1001 19:22:54.401040   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:54.401066   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:54.401078   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:54.401085   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:54.404732   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:54.901129   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:54.901154   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:54.901163   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:54.901166   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:54.904547   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:55.400669   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:55.400692   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:55.400700   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:55.400703   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:55.404556   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:55.900944   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:55.900966   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:55.900974   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:55.900977   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:55.904209   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:55.904851   31154 node_ready.go:53] node "ha-193737-m03" has status "Ready":"False"
	I1001 19:22:56.400513   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:56.400537   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:56.400548   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:56.400554   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:56.403671   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:56.900541   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:56.900564   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:56.900575   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:56.900582   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:56.903726   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:57.400178   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:57.400200   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:57.400209   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:57.400216   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:57.403658   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:57.901131   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:57.901154   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:57.901163   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:57.901169   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:57.904387   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:58.401066   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:58.401087   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:58.401095   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:58.401098   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:58.404875   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:58.405329   31154 node_ready.go:53] node "ha-193737-m03" has status "Ready":"False"
	I1001 19:22:58.900140   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:58.900160   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:58.900168   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:58.900172   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:58.903081   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:22:59.401118   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:59.401143   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.401153   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.401156   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.404480   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:59.405079   31154 node_ready.go:49] node "ha-193737-m03" has status "Ready":"True"
	I1001 19:22:59.405100   31154 node_ready.go:38] duration metric: took 16.505122802s for node "ha-193737-m03" to be "Ready" ...
	I1001 19:22:59.405110   31154 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 19:22:59.405190   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:22:59.405207   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.405217   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.405227   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.412572   31154 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1001 19:22:59.420220   31154 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hd5hv" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.420321   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hd5hv
	I1001 19:22:59.420334   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.420345   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.420353   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.423179   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:22:59.423949   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:22:59.423964   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.423970   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.423975   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.426304   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:22:59.426762   31154 pod_ready.go:93] pod "coredns-7c65d6cfc9-hd5hv" in "kube-system" namespace has status "Ready":"True"
	I1001 19:22:59.426780   31154 pod_ready.go:82] duration metric: took 6.530664ms for pod "coredns-7c65d6cfc9-hd5hv" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.426796   31154 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v2wsx" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.426857   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-v2wsx
	I1001 19:22:59.426866   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.426876   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.426887   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.429141   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:22:59.429823   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:22:59.429840   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.429848   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.429852   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.431860   31154 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1001 19:22:59.432333   31154 pod_ready.go:93] pod "coredns-7c65d6cfc9-v2wsx" in "kube-system" namespace has status "Ready":"True"
	I1001 19:22:59.432348   31154 pod_ready.go:82] duration metric: took 5.544704ms for pod "coredns-7c65d6cfc9-v2wsx" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.432374   31154 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.432437   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-ha-193737
	I1001 19:22:59.432448   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.432456   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.432459   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.434479   31154 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1001 19:22:59.435042   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:22:59.435057   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.435063   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.435067   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.437217   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:22:59.437787   31154 pod_ready.go:93] pod "etcd-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:22:59.437803   31154 pod_ready.go:82] duration metric: took 5.420394ms for pod "etcd-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.437813   31154 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.437864   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-ha-193737-m02
	I1001 19:22:59.437874   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.437883   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.437892   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.440631   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:22:59.441277   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:22:59.441295   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.441316   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.441325   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.448195   31154 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1001 19:22:59.448905   31154 pod_ready.go:93] pod "etcd-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:22:59.448925   31154 pod_ready.go:82] duration metric: took 11.104591ms for pod "etcd-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.448938   31154 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.601259   31154 request.go:632] Waited for 152.231969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-ha-193737-m03
	I1001 19:22:59.601316   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-ha-193737-m03
	I1001 19:22:59.601321   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.601329   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.601333   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.604878   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:59.801921   31154 request.go:632] Waited for 196.382761ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:59.802008   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:22:59.802021   31154 round_trippers.go:469] Request Headers:
	I1001 19:22:59.802031   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:22:59.802037   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:22:59.805203   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:22:59.806083   31154 pod_ready.go:93] pod "etcd-ha-193737-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 19:22:59.806103   31154 pod_ready.go:82] duration metric: took 357.156614ms for pod "etcd-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
	I1001 19:22:59.806134   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:00.001202   31154 request.go:632] Waited for 194.974996ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737
	I1001 19:23:00.001255   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737
	I1001 19:23:00.001260   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:00.001267   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:00.001271   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:00.005307   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:23:00.201989   31154 request.go:632] Waited for 195.321685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:00.202114   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:00.202132   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:00.202146   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:00.202158   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:00.205788   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:00.206508   31154 pod_ready.go:93] pod "kube-apiserver-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:00.206529   31154 pod_ready.go:82] duration metric: took 400.381151ms for pod "kube-apiserver-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:00.206541   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:00.401602   31154 request.go:632] Waited for 194.993098ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737-m02
	I1001 19:23:00.401663   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737-m02
	I1001 19:23:00.401668   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:00.401676   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:00.401680   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:00.405450   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:00.601599   31154 request.go:632] Waited for 195.316962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:00.601692   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:00.601700   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:00.601707   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:00.601711   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:00.605188   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:00.605660   31154 pod_ready.go:93] pod "kube-apiserver-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:00.605679   31154 pod_ready.go:82] duration metric: took 399.130829ms for pod "kube-apiserver-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:00.605688   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:00.801836   31154 request.go:632] Waited for 196.081559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737-m03
	I1001 19:23:00.801903   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-193737-m03
	I1001 19:23:00.801908   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:00.801926   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:00.801931   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:00.805500   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:01.001996   31154 request.go:632] Waited for 195.706291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:01.002060   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:01.002068   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:01.002082   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:01.002090   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:01.005674   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:01.006438   31154 pod_ready.go:93] pod "kube-apiserver-ha-193737-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:01.006466   31154 pod_ready.go:82] duration metric: took 400.769669ms for pod "kube-apiserver-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:01.006480   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:01.201564   31154 request.go:632] Waited for 195.007953ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737
	I1001 19:23:01.201618   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737
	I1001 19:23:01.201623   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:01.201630   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:01.201634   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:01.204998   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:01.402159   31154 request.go:632] Waited for 196.410696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:01.402225   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:01.402232   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:01.402243   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:01.402250   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:01.405639   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:01.406259   31154 pod_ready.go:93] pod "kube-controller-manager-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:01.406284   31154 pod_ready.go:82] duration metric: took 399.796485ms for pod "kube-controller-manager-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:01.406298   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:01.601556   31154 request.go:632] Waited for 195.171182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737-m02
	I1001 19:23:01.601629   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737-m02
	I1001 19:23:01.601638   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:01.601646   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:01.601655   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:01.605271   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:01.801581   31154 request.go:632] Waited for 195.404456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:01.801644   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:01.801651   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:01.801662   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:01.801669   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:01.805042   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:01.805673   31154 pod_ready.go:93] pod "kube-controller-manager-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:01.805694   31154 pod_ready.go:82] duration metric: took 399.387622ms for pod "kube-controller-manager-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:01.805707   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:02.001904   31154 request.go:632] Waited for 195.994245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737-m03
	I1001 19:23:02.002040   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-193737-m03
	I1001 19:23:02.002064   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:02.002075   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:02.002080   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:02.005612   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:02.201553   31154 request.go:632] Waited for 195.185972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:02.201606   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:02.201612   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:02.201628   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:02.201645   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:02.205018   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:02.205533   31154 pod_ready.go:93] pod "kube-controller-manager-ha-193737-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:02.205552   31154 pod_ready.go:82] duration metric: took 399.838551ms for pod "kube-controller-manager-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:02.205563   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4294m" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:02.401983   31154 request.go:632] Waited for 196.357491ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4294m
	I1001 19:23:02.402038   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4294m
	I1001 19:23:02.402043   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:02.402049   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:02.402054   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:02.405225   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:02.601208   31154 request.go:632] Waited for 195.289332ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:02.601293   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:02.601304   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:02.601316   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:02.601328   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:02.604768   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:02.605212   31154 pod_ready.go:93] pod "kube-proxy-4294m" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:02.605230   31154 pod_ready.go:82] duration metric: took 399.66052ms for pod "kube-proxy-4294m" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:02.605242   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9pm4t" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:02.801359   31154 request.go:632] Waited for 196.035084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9pm4t
	I1001 19:23:02.801440   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9pm4t
	I1001 19:23:02.801448   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:02.801462   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:02.801473   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:02.804772   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:03.001444   31154 request.go:632] Waited for 196.042411ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:03.001517   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:03.001522   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:03.001536   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:03.001543   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:03.005199   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:03.005738   31154 pod_ready.go:93] pod "kube-proxy-9pm4t" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:03.005763   31154 pod_ready.go:82] duration metric: took 400.510951ms for pod "kube-proxy-9pm4t" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:03.005773   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zpsll" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:03.201543   31154 request.go:632] Waited for 195.704518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zpsll
	I1001 19:23:03.201618   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zpsll
	I1001 19:23:03.201627   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:03.201634   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:03.201639   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:03.204535   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:23:03.401528   31154 request.go:632] Waited for 196.292025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:03.401585   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:03.401590   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:03.401597   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:03.401602   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:03.405338   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:03.406008   31154 pod_ready.go:93] pod "kube-proxy-zpsll" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:03.406025   31154 pod_ready.go:82] duration metric: took 400.246215ms for pod "kube-proxy-zpsll" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:03.406035   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:03.601668   31154 request.go:632] Waited for 195.548834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737
	I1001 19:23:03.601752   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737
	I1001 19:23:03.601760   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:03.601772   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:03.601779   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:03.605345   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:03.801308   31154 request.go:632] Waited for 195.294104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:03.801403   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737
	I1001 19:23:03.801417   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:03.801427   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:03.801434   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:03.804468   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:03.805276   31154 pod_ready.go:93] pod "kube-scheduler-ha-193737" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:03.805293   31154 pod_ready.go:82] duration metric: took 399.251767ms for pod "kube-scheduler-ha-193737" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:03.805303   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:04.001445   31154 request.go:632] Waited for 196.067713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737-m02
	I1001 19:23:04.001522   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737-m02
	I1001 19:23:04.001531   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:04.001541   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:04.001548   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:04.004705   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:04.201792   31154 request.go:632] Waited for 196.362451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:04.201872   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m02
	I1001 19:23:04.201879   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:04.201889   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:04.201897   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:04.205376   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:04.206212   31154 pod_ready.go:93] pod "kube-scheduler-ha-193737-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:04.206235   31154 pod_ready.go:82] duration metric: took 400.923668ms for pod "kube-scheduler-ha-193737-m02" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:04.206250   31154 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:04.401166   31154 request.go:632] Waited for 194.837724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737-m03
	I1001 19:23:04.401244   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-193737-m03
	I1001 19:23:04.401252   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:04.401266   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:04.401273   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:04.404292   31154 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 19:23:04.601244   31154 request.go:632] Waited for 196.299344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:04.601300   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/ha-193737-m03
	I1001 19:23:04.601306   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:04.601313   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:04.601317   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:04.604470   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:04.605038   31154 pod_ready.go:93] pod "kube-scheduler-ha-193737-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 19:23:04.605055   31154 pod_ready.go:82] duration metric: took 398.796981ms for pod "kube-scheduler-ha-193737-m03" in "kube-system" namespace to be "Ready" ...
	I1001 19:23:04.605065   31154 pod_ready.go:39] duration metric: took 5.199943212s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 19:23:04.605079   31154 api_server.go:52] waiting for apiserver process to appear ...
	I1001 19:23:04.605144   31154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 19:23:04.623271   31154 api_server.go:72] duration metric: took 21.981652881s to wait for apiserver process to appear ...
	I1001 19:23:04.623293   31154 api_server.go:88] waiting for apiserver healthz status ...
	I1001 19:23:04.623314   31154 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I1001 19:23:04.631212   31154 api_server.go:279] https://192.168.39.14:8443/healthz returned 200:
	ok
	I1001 19:23:04.631285   31154 round_trippers.go:463] GET https://192.168.39.14:8443/version
	I1001 19:23:04.631295   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:04.631303   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:04.631310   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:04.632155   31154 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1001 19:23:04.632226   31154 api_server.go:141] control plane version: v1.31.1
	I1001 19:23:04.632243   31154 api_server.go:131] duration metric: took 8.942184ms to wait for apiserver health ...
	I1001 19:23:04.632254   31154 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 19:23:04.801981   31154 request.go:632] Waited for 169.64915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:23:04.802068   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:23:04.802079   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:04.802090   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:04.802102   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:04.809502   31154 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1001 19:23:04.815901   31154 system_pods.go:59] 24 kube-system pods found
	I1001 19:23:04.815930   31154 system_pods.go:61] "coredns-7c65d6cfc9-hd5hv" [31f0afff-5571-46d6-888f-8982c71ba191] Running
	I1001 19:23:04.815935   31154 system_pods.go:61] "coredns-7c65d6cfc9-v2wsx" [8e3dd318-5017-4ada-bf2f-61b640ee2030] Running
	I1001 19:23:04.815939   31154 system_pods.go:61] "etcd-ha-193737" [99c3674d-50d6-4160-89dd-afb3c2e71039] Running
	I1001 19:23:04.815943   31154 system_pods.go:61] "etcd-ha-193737-m02" [a541b9c8-c10e-45b4-ac9c-d16e8bf659b0] Running
	I1001 19:23:04.815946   31154 system_pods.go:61] "etcd-ha-193737-m03" [de61043b-ff4c-4d28-ab01-d63abf25ef30] Running
	I1001 19:23:04.815949   31154 system_pods.go:61] "kindnet-bqht8" [3cef1863-ae14-4ab4-bc4f-5545e058cc9c] Running
	I1001 19:23:04.815953   31154 system_pods.go:61] "kindnet-drdlr" [13177890-a0eb-47ff-8fe2-585992810e47] Running
	I1001 19:23:04.815955   31154 system_pods.go:61] "kindnet-wnr6g" [89e11419-0c5c-486e-bdbf-eaf6fab1e62c] Running
	I1001 19:23:04.815958   31154 system_pods.go:61] "kube-apiserver-ha-193737" [432bf6a6-4607-4f55-a026-d910075d3145] Running
	I1001 19:23:04.815961   31154 system_pods.go:61] "kube-apiserver-ha-193737-m02" [379a24af-2148-4870-9f4b-25ad48943445] Running
	I1001 19:23:04.815964   31154 system_pods.go:61] "kube-apiserver-ha-193737-m03" [fbf7fbec-142d-4402-9bcc-c3e25e11ac2e] Running
	I1001 19:23:04.815968   31154 system_pods.go:61] "kube-controller-manager-ha-193737" [95fb44db-8cb3-4db9-8a84-ab82e15760de] Running
	I1001 19:23:04.815971   31154 system_pods.go:61] "kube-controller-manager-ha-193737-m02" [5b0821e0-673a-440c-8ca1-fefbfe913b94] Running
	I1001 19:23:04.815974   31154 system_pods.go:61] "kube-controller-manager-ha-193737-m03" [fd854d14-6abb-42eb-b560-e816e86c6767] Running
	I1001 19:23:04.815981   31154 system_pods.go:61] "kube-proxy-4294m" [f454ca0f-d662-4dfd-ab77-ebccaf3a6b12] Running
	I1001 19:23:04.815987   31154 system_pods.go:61] "kube-proxy-9pm4t" [5dba191b-ba4a-4a22-80df-65afd1dcbfb5] Running
	I1001 19:23:04.815989   31154 system_pods.go:61] "kube-proxy-zpsll" [c18fec3c-2880-4860-b220-a44d5e523bed] Running
	I1001 19:23:04.815998   31154 system_pods.go:61] "kube-scheduler-ha-193737" [46ca2b37-6145-4111-9088-bcf51307be3a] Running
	I1001 19:23:04.816002   31154 system_pods.go:61] "kube-scheduler-ha-193737-m02" [72be6cd5-d226-46d6-b675-99014e544dfb] Running
	I1001 19:23:04.816005   31154 system_pods.go:61] "kube-scheduler-ha-193737-m03" [129167e7-febe-4de3-a35f-3f0e668c7a77] Running
	I1001 19:23:04.816008   31154 system_pods.go:61] "kube-vip-ha-193737" [cbe8e6a4-08f3-4db3-af4d-810a5592597c] Running
	I1001 19:23:04.816014   31154 system_pods.go:61] "kube-vip-ha-193737-m02" [da7dfa59-1b27-45ce-9e61-ad8da40ec548] Running
	I1001 19:23:04.816017   31154 system_pods.go:61] "kube-vip-ha-193737-m03" [7a9bbd2f-8b9a-4104-baf4-11efdd662028] Running
	I1001 19:23:04.816022   31154 system_pods.go:61] "storage-provisioner" [d5b587a6-418b-47e5-9bf7-3fb6fa5e3372] Running
	I1001 19:23:04.816027   31154 system_pods.go:74] duration metric: took 183.765578ms to wait for pod list to return data ...
	I1001 19:23:04.816036   31154 default_sa.go:34] waiting for default service account to be created ...
	I1001 19:23:05.001464   31154 request.go:632] Waited for 185.352635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/default/serviceaccounts
	I1001 19:23:05.001522   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/default/serviceaccounts
	I1001 19:23:05.001527   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:05.001534   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:05.001538   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:05.005437   31154 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 19:23:05.005559   31154 default_sa.go:45] found service account: "default"
	I1001 19:23:05.005576   31154 default_sa.go:55] duration metric: took 189.530453ms for default service account to be created ...
	I1001 19:23:05.005589   31154 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 19:23:05.201939   31154 request.go:632] Waited for 196.276664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:23:05.201999   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I1001 19:23:05.202009   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:05.202018   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:05.202026   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:05.208844   31154 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1001 19:23:05.215522   31154 system_pods.go:86] 24 kube-system pods found
	I1001 19:23:05.215551   31154 system_pods.go:89] "coredns-7c65d6cfc9-hd5hv" [31f0afff-5571-46d6-888f-8982c71ba191] Running
	I1001 19:23:05.215559   31154 system_pods.go:89] "coredns-7c65d6cfc9-v2wsx" [8e3dd318-5017-4ada-bf2f-61b640ee2030] Running
	I1001 19:23:05.215563   31154 system_pods.go:89] "etcd-ha-193737" [99c3674d-50d6-4160-89dd-afb3c2e71039] Running
	I1001 19:23:05.215567   31154 system_pods.go:89] "etcd-ha-193737-m02" [a541b9c8-c10e-45b4-ac9c-d16e8bf659b0] Running
	I1001 19:23:05.215570   31154 system_pods.go:89] "etcd-ha-193737-m03" [de61043b-ff4c-4d28-ab01-d63abf25ef30] Running
	I1001 19:23:05.215574   31154 system_pods.go:89] "kindnet-bqht8" [3cef1863-ae14-4ab4-bc4f-5545e058cc9c] Running
	I1001 19:23:05.215578   31154 system_pods.go:89] "kindnet-drdlr" [13177890-a0eb-47ff-8fe2-585992810e47] Running
	I1001 19:23:05.215581   31154 system_pods.go:89] "kindnet-wnr6g" [89e11419-0c5c-486e-bdbf-eaf6fab1e62c] Running
	I1001 19:23:05.215584   31154 system_pods.go:89] "kube-apiserver-ha-193737" [432bf6a6-4607-4f55-a026-d910075d3145] Running
	I1001 19:23:05.215588   31154 system_pods.go:89] "kube-apiserver-ha-193737-m02" [379a24af-2148-4870-9f4b-25ad48943445] Running
	I1001 19:23:05.215591   31154 system_pods.go:89] "kube-apiserver-ha-193737-m03" [fbf7fbec-142d-4402-9bcc-c3e25e11ac2e] Running
	I1001 19:23:05.215595   31154 system_pods.go:89] "kube-controller-manager-ha-193737" [95fb44db-8cb3-4db9-8a84-ab82e15760de] Running
	I1001 19:23:05.215598   31154 system_pods.go:89] "kube-controller-manager-ha-193737-m02" [5b0821e0-673a-440c-8ca1-fefbfe913b94] Running
	I1001 19:23:05.215601   31154 system_pods.go:89] "kube-controller-manager-ha-193737-m03" [fd854d14-6abb-42eb-b560-e816e86c6767] Running
	I1001 19:23:05.215603   31154 system_pods.go:89] "kube-proxy-4294m" [f454ca0f-d662-4dfd-ab77-ebccaf3a6b12] Running
	I1001 19:23:05.215606   31154 system_pods.go:89] "kube-proxy-9pm4t" [5dba191b-ba4a-4a22-80df-65afd1dcbfb5] Running
	I1001 19:23:05.215609   31154 system_pods.go:89] "kube-proxy-zpsll" [c18fec3c-2880-4860-b220-a44d5e523bed] Running
	I1001 19:23:05.215613   31154 system_pods.go:89] "kube-scheduler-ha-193737" [46ca2b37-6145-4111-9088-bcf51307be3a] Running
	I1001 19:23:05.215616   31154 system_pods.go:89] "kube-scheduler-ha-193737-m02" [72be6cd5-d226-46d6-b675-99014e544dfb] Running
	I1001 19:23:05.215621   31154 system_pods.go:89] "kube-scheduler-ha-193737-m03" [129167e7-febe-4de3-a35f-3f0e668c7a77] Running
	I1001 19:23:05.215626   31154 system_pods.go:89] "kube-vip-ha-193737" [cbe8e6a4-08f3-4db3-af4d-810a5592597c] Running
	I1001 19:23:05.215630   31154 system_pods.go:89] "kube-vip-ha-193737-m02" [da7dfa59-1b27-45ce-9e61-ad8da40ec548] Running
	I1001 19:23:05.215634   31154 system_pods.go:89] "kube-vip-ha-193737-m03" [7a9bbd2f-8b9a-4104-baf4-11efdd662028] Running
	I1001 19:23:05.215639   31154 system_pods.go:89] "storage-provisioner" [d5b587a6-418b-47e5-9bf7-3fb6fa5e3372] Running
	I1001 19:23:05.215647   31154 system_pods.go:126] duration metric: took 210.049347ms to wait for k8s-apps to be running ...
	I1001 19:23:05.215659   31154 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 19:23:05.215714   31154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 19:23:05.232730   31154 system_svc.go:56] duration metric: took 17.059785ms WaitForService to wait for kubelet
	I1001 19:23:05.232757   31154 kubeadm.go:582] duration metric: took 22.59114375s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 19:23:05.232773   31154 node_conditions.go:102] verifying NodePressure condition ...
	I1001 19:23:05.401103   31154 request.go:632] Waited for 168.256226ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes
	I1001 19:23:05.401154   31154 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes
	I1001 19:23:05.401159   31154 round_trippers.go:469] Request Headers:
	I1001 19:23:05.401165   31154 round_trippers.go:473]     Accept: application/json, */*
	I1001 19:23:05.401169   31154 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 19:23:05.405382   31154 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 19:23:05.406740   31154 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 19:23:05.406763   31154 node_conditions.go:123] node cpu capacity is 2
	I1001 19:23:05.406777   31154 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 19:23:05.406783   31154 node_conditions.go:123] node cpu capacity is 2
	I1001 19:23:05.406789   31154 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 19:23:05.406794   31154 node_conditions.go:123] node cpu capacity is 2
	I1001 19:23:05.406799   31154 node_conditions.go:105] duration metric: took 174.020761ms to run NodePressure ...
	I1001 19:23:05.406816   31154 start.go:241] waiting for startup goroutines ...
	I1001 19:23:05.406842   31154 start.go:255] writing updated cluster config ...
	I1001 19:23:05.407176   31154 ssh_runner.go:195] Run: rm -f paused
	I1001 19:23:05.459358   31154 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 19:23:05.461856   31154 out.go:177] * Done! kubectl is now configured to use "ha-193737" cluster and "default" namespace by default
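	
	The log lines above record a simple client-side poll: roughly twice a second minikube GETs /api/v1/nodes/ha-193737-m03 until the node reports "Ready":"True", then repeats the same request/check pattern for each system-critical pod in kube-system before declaring the cluster started. The snippet below is a minimal, illustrative client-go sketch of that poll, not code from minikube or from this test run; the context name, node name, poll interval and 6m timeout are taken or inferred from the log, and everything else (package layout, error handling) is assumed for illustration.

	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/apimachinery/pkg/util/wait"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    // nodeReady reports whether the node's Ready condition is True,
	    // which is the check the "node_ready" log lines above are making.
	    func nodeReady(n *corev1.Node) bool {
	    	for _, c := range n.Status.Conditions {
	    		if c.Type == corev1.NodeReady {
	    			return c.Status == corev1.ConditionTrue
	    		}
	    	}
	    	return false
	    }

	    func main() {
	    	// Load kubeconfig for the "ha-193737" context (default kubeconfig path assumed).
	    	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
	    		clientcmd.NewDefaultClientConfigLoadingRules(),
	    		&clientcmd.ConfigOverrides{CurrentContext: "ha-193737"},
	    	).ClientConfig()
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}

	    	// Poll every 500ms (the log shows about two GETs per second) for up to 6 minutes.
	    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
	    		func(ctx context.Context) (bool, error) {
	    			n, gerr := cs.CoreV1().Nodes().Get(ctx, "ha-193737-m03", metav1.GetOptions{})
	    			if gerr != nil {
	    				return false, nil // keep polling through transient API errors
	    			}
	    			return nodeReady(n), nil
	    		})
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Println("node ha-193737-m03 is Ready")
	    }

	Under the same assumptions, an equivalent check from the command line would be `kubectl --context ha-193737 wait --for=condition=Ready node/ha-193737-m03 --timeout=6m`, matching the kubectl wait pattern used elsewhere in this report.
	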
	
	
	==> CRI-O <==
	Oct 01 19:27:05 ha-193737 crio[661]: time="2024-10-01 19:27:05.466127305Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8db4ba94-896d-44f6-89f9-082f567b09a0 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:27:05 ha-193737 crio[661]: time="2024-10-01 19:27:05.467390838Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e82e8dd7-6b55-4219-9cf7-7563931db22f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:27:05 ha-193737 crio[661]: time="2024-10-01 19:27:05.467886600Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810825467861610,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e82e8dd7-6b55-4219-9cf7-7563931db22f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:27:05 ha-193737 crio[661]: time="2024-10-01 19:27:05.468592133Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8edcb5d3-df72-41cb-ab0c-efa3eb6d6c81 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:27:05 ha-193737 crio[661]: time="2024-10-01 19:27:05.468659110Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8edcb5d3-df72-41cb-ab0c-efa3eb6d6c81 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:27:05 ha-193737 crio[661]: time="2024-10-01 19:27:05.468935200Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d523f1298c385184a9e7db15e8734998757e06fff0e55aa1844ab59ccf00f07e,PodSandboxId:8ddf36dc2effd5b387a7cb5392afc6a34d689dee2d0047e27480f81871db6f74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727810590371295276,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75485355206ed8610939d128b62d0d55ec84cbb8615f7261469f854e52b6447d,PodSandboxId:7ea8efe8e5b7908a13d9ad3be6c8e7a9e871b332f5889126ea13429055eaaed3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727810449416160580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3,PodSandboxId:b4ab4980fd9c6cd234ac1f28d252c896bc77b9f7b271045be59cae0784f96217,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449360614251,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a,PodSandboxId:69e4ceb6e3399a835dcd125c01919d9053abee035e3bf2b2595377489dc56cb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449354501899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-50
17-4ada-bf2f-61b640ee2030,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525,PodSandboxId:f7fcfb918d1fd69f11a77764747f9408ce5ec85d4f114f22269ab22622168240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278104
37213853474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c,PodSandboxId:65474abfbeabf483f4a28a3e3229b175f3f41ffad78ea5d1303d49d9419d3ec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727810437061931545,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c962c4138a0019c76f981b72e8948efd386403002bb686f7e70c38bc20c3d542,PodSandboxId:cb787d15fa3b89fa5c4a479def2aed57fb03a1d1abdf07f6170488ae8f5b774f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727810427447788769,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cf6ac3eb69fe181eb29ee323afb176,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7,PodSandboxId:4873897c8ffd7c74e4093155af6f4743491c9dcea652c8fba96e6904de277652,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727810424745051374,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e,PodSandboxId:c74bc4df7851a3393d5eca165d5e24dbdef7af644bc9844819bad425c8663aac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727810424759083818,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2c57920320eb65f4477cbff8aec818a7ecddb3461a77ca111172e4d1a5f7e71,PodSandboxId:f74fa319889b0624f5157b5a226af0e5a24f37f6922332d6e0037af95aa50aef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727810424668320387,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cd510d04d444e2a3fd26699f0dbb26,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc9d05172b801a89ec36c0d0d549bfeee1aafb67f6586e3f4f163cb63f01d062,PodSandboxId:d6e9deea0a8069c598634405f46a9fa565d1d0b888716c0c94e4ed5b44588a10,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727810424633610117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8edcb5d3-df72-41cb-ab0c-efa3eb6d6c81 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:27:05 ha-193737 crio[661]: time="2024-10-01 19:27:05.505648951Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d86fd748-d33c-418c-9e93-fedfbbd02a0b name=/runtime.v1.RuntimeService/Version
	Oct 01 19:27:05 ha-193737 crio[661]: time="2024-10-01 19:27:05.505850840Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d86fd748-d33c-418c-9e93-fedfbbd02a0b name=/runtime.v1.RuntimeService/Version
	Oct 01 19:27:05 ha-193737 crio[661]: time="2024-10-01 19:27:05.506970316Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=301fd12c-638b-4194-ac31-64ef9409b5ae name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:27:05 ha-193737 crio[661]: time="2024-10-01 19:27:05.507394525Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810825507369741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=301fd12c-638b-4194-ac31-64ef9409b5ae name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:27:05 ha-193737 crio[661]: time="2024-10-01 19:27:05.508138315Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b39315db-0f4f-4f9a-9848-d3774bec79b6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:27:05 ha-193737 crio[661]: time="2024-10-01 19:27:05.508209245Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b39315db-0f4f-4f9a-9848-d3774bec79b6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:27:05 ha-193737 crio[661]: time="2024-10-01 19:27:05.508440012Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d523f1298c385184a9e7db15e8734998757e06fff0e55aa1844ab59ccf00f07e,PodSandboxId:8ddf36dc2effd5b387a7cb5392afc6a34d689dee2d0047e27480f81871db6f74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727810590371295276,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75485355206ed8610939d128b62d0d55ec84cbb8615f7261469f854e52b6447d,PodSandboxId:7ea8efe8e5b7908a13d9ad3be6c8e7a9e871b332f5889126ea13429055eaaed3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727810449416160580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3,PodSandboxId:b4ab4980fd9c6cd234ac1f28d252c896bc77b9f7b271045be59cae0784f96217,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449360614251,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a,PodSandboxId:69e4ceb6e3399a835dcd125c01919d9053abee035e3bf2b2595377489dc56cb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449354501899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-50
17-4ada-bf2f-61b640ee2030,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525,PodSandboxId:f7fcfb918d1fd69f11a77764747f9408ce5ec85d4f114f22269ab22622168240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278104
37213853474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c,PodSandboxId:65474abfbeabf483f4a28a3e3229b175f3f41ffad78ea5d1303d49d9419d3ec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727810437061931545,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c962c4138a0019c76f981b72e8948efd386403002bb686f7e70c38bc20c3d542,PodSandboxId:cb787d15fa3b89fa5c4a479def2aed57fb03a1d1abdf07f6170488ae8f5b774f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727810427447788769,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cf6ac3eb69fe181eb29ee323afb176,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7,PodSandboxId:4873897c8ffd7c74e4093155af6f4743491c9dcea652c8fba96e6904de277652,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727810424745051374,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e,PodSandboxId:c74bc4df7851a3393d5eca165d5e24dbdef7af644bc9844819bad425c8663aac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727810424759083818,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2c57920320eb65f4477cbff8aec818a7ecddb3461a77ca111172e4d1a5f7e71,PodSandboxId:f74fa319889b0624f5157b5a226af0e5a24f37f6922332d6e0037af95aa50aef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727810424668320387,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cd510d04d444e2a3fd26699f0dbb26,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc9d05172b801a89ec36c0d0d549bfeee1aafb67f6586e3f4f163cb63f01d062,PodSandboxId:d6e9deea0a8069c598634405f46a9fa565d1d0b888716c0c94e4ed5b44588a10,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727810424633610117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b39315db-0f4f-4f9a-9848-d3774bec79b6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:27:05 ha-193737 crio[661]: time="2024-10-01 19:27:05.517216481Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=4c82a4bd-3fdb-4370-b375-8c16ffb76e4c name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 01 19:27:05 ha-193737 crio[661]: time="2024-10-01 19:27:05.517481532Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8ddf36dc2effd5b387a7cb5392afc6a34d689dee2d0047e27480f81871db6f74,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-rbjkx,Uid:ba3ecbe1-fb88-4674-b679-a442b28cd68e,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727810586682758033,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-01T19:23:06.356548410Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7ea8efe8e5b7908a13d9ad3be6c8e7a9e871b332f5889126ea13429055eaaed3,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1727810449150584704,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-01T19:20:48.833089109Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:69e4ceb6e3399a835dcd125c01919d9053abee035e3bf2b2595377489dc56cb8,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-v2wsx,Uid:8e3dd318-5017-4ada-bf2f-61b640ee2030,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727810449146909574,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-5017-4ada-bf2f-61b640ee2030,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-01T19:20:48.833790629Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b4ab4980fd9c6cd234ac1f28d252c896bc77b9f7b271045be59cae0784f96217,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-hd5hv,Uid:31f0afff-5571-46d6-888f-8982c71ba191,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1727810449136545895,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-01T19:20:48.824987880Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f7fcfb918d1fd69f11a77764747f9408ce5ec85d4f114f22269ab22622168240,Metadata:&PodSandboxMetadata{Name:kindnet-wnr6g,Uid:89e11419-0c5c-486e-bdbf-eaf6fab1e62c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727810436813914354,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-10-01T19:20:35.888519006Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:65474abfbeabf483f4a28a3e3229b175f3f41ffad78ea5d1303d49d9419d3ec5,Metadata:&PodSandboxMetadata{Name:kube-proxy-zpsll,Uid:c18fec3c-2880-4860-b220-a44d5e523bed,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727810436811137861,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-01T19:20:35.894320364Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cb787d15fa3b89fa5c4a479def2aed57fb03a1d1abdf07f6170488ae8f5b774f,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-193737,Uid:00cf6ac3eb69fe181eb29ee323afb176,Namespace:kube-system,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1727810424463689607,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cf6ac3eb69fe181eb29ee323afb176,},Annotations:map[string]string{kubernetes.io/config.hash: 00cf6ac3eb69fe181eb29ee323afb176,kubernetes.io/config.seen: 2024-10-01T19:20:23.971420116Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f74fa319889b0624f5157b5a226af0e5a24f37f6922332d6e0037af95aa50aef,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-193737,Uid:26cd510d04d444e2a3fd26699f0dbb26,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727810424458185869,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cd510d04d444e2a3fd26699f0dbb26,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apis
erver.advertise-address.endpoint: 192.168.39.14:8443,kubernetes.io/config.hash: 26cd510d04d444e2a3fd26699f0dbb26,kubernetes.io/config.seen: 2024-10-01T19:20:23.971416640Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4873897c8ffd7c74e4093155af6f4743491c9dcea652c8fba96e6904de277652,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-193737,Uid:0322ee97040a2f569785dff412cf907f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727810424450474160,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0322ee97040a2f569785dff412cf907f,kubernetes.io/config.seen: 2024-10-01T19:20:23.971419282Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d6e9deea0a8069c598634405f46a9fa565d1d0b888716c0c94e4ed5b44588a10,Meta
data:&PodSandboxMetadata{Name:kube-controller-manager-ha-193737,Uid:de600bfbca1d9c3f01fa833eb2f872cd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727810424450223399,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: de600bfbca1d9c3f01fa833eb2f872cd,kubernetes.io/config.seen: 2024-10-01T19:20:23.971418215Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c74bc4df7851a3393d5eca165d5e24dbdef7af644bc9844819bad425c8663aac,Metadata:&PodSandboxMetadata{Name:etcd-ha-193737,Uid:b7769b1af58540331dfe5effd67e84a0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727810424434200231,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-193737,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.14:2379,kubernetes.io/config.hash: b7769b1af58540331dfe5effd67e84a0,kubernetes.io/config.seen: 2024-10-01T19:20:23.971412372Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=4c82a4bd-3fdb-4370-b375-8c16ffb76e4c name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 01 19:27:05 ha-193737 crio[661]: time="2024-10-01 19:27:05.518273560Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c1072b8-1b2c-4f51-bd6e-ae4fe5a32249 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:27:05 ha-193737 crio[661]: time="2024-10-01 19:27:05.518327312Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c1072b8-1b2c-4f51-bd6e-ae4fe5a32249 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:27:05 ha-193737 crio[661]: time="2024-10-01 19:27:05.518567818Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d523f1298c385184a9e7db15e8734998757e06fff0e55aa1844ab59ccf00f07e,PodSandboxId:8ddf36dc2effd5b387a7cb5392afc6a34d689dee2d0047e27480f81871db6f74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727810590371295276,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75485355206ed8610939d128b62d0d55ec84cbb8615f7261469f854e52b6447d,PodSandboxId:7ea8efe8e5b7908a13d9ad3be6c8e7a9e871b332f5889126ea13429055eaaed3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727810449416160580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3,PodSandboxId:b4ab4980fd9c6cd234ac1f28d252c896bc77b9f7b271045be59cae0784f96217,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449360614251,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a,PodSandboxId:69e4ceb6e3399a835dcd125c01919d9053abee035e3bf2b2595377489dc56cb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449354501899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-50
17-4ada-bf2f-61b640ee2030,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525,PodSandboxId:f7fcfb918d1fd69f11a77764747f9408ce5ec85d4f114f22269ab22622168240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278104
37213853474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c,PodSandboxId:65474abfbeabf483f4a28a3e3229b175f3f41ffad78ea5d1303d49d9419d3ec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727810437061931545,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c962c4138a0019c76f981b72e8948efd386403002bb686f7e70c38bc20c3d542,PodSandboxId:cb787d15fa3b89fa5c4a479def2aed57fb03a1d1abdf07f6170488ae8f5b774f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727810427447788769,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cf6ac3eb69fe181eb29ee323afb176,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7,PodSandboxId:4873897c8ffd7c74e4093155af6f4743491c9dcea652c8fba96e6904de277652,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727810424745051374,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e,PodSandboxId:c74bc4df7851a3393d5eca165d5e24dbdef7af644bc9844819bad425c8663aac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727810424759083818,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2c57920320eb65f4477cbff8aec818a7ecddb3461a77ca111172e4d1a5f7e71,PodSandboxId:f74fa319889b0624f5157b5a226af0e5a24f37f6922332d6e0037af95aa50aef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727810424668320387,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cd510d04d444e2a3fd26699f0dbb26,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc9d05172b801a89ec36c0d0d549bfeee1aafb67f6586e3f4f163cb63f01d062,PodSandboxId:d6e9deea0a8069c598634405f46a9fa565d1d0b888716c0c94e4ed5b44588a10,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727810424633610117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0c1072b8-1b2c-4f51-bd6e-ae4fe5a32249 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:27:05 ha-193737 crio[661]: time="2024-10-01 19:27:05.548282378Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9e6402c7-e122-40c0-90c8-c10a517a53de name=/runtime.v1.RuntimeService/Version
	Oct 01 19:27:05 ha-193737 crio[661]: time="2024-10-01 19:27:05.548370972Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9e6402c7-e122-40c0-90c8-c10a517a53de name=/runtime.v1.RuntimeService/Version
	Oct 01 19:27:05 ha-193737 crio[661]: time="2024-10-01 19:27:05.549466672Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=97158743-13f7-43f6-b648-ec076a3b05e4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:27:05 ha-193737 crio[661]: time="2024-10-01 19:27:05.549918550Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810825549893566,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97158743-13f7-43f6-b648-ec076a3b05e4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:27:05 ha-193737 crio[661]: time="2024-10-01 19:27:05.550341786Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ba70b19-fd69-4c13-8ba2-50b88c5090a7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:27:05 ha-193737 crio[661]: time="2024-10-01 19:27:05.550407914Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ba70b19-fd69-4c13-8ba2-50b88c5090a7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:27:05 ha-193737 crio[661]: time="2024-10-01 19:27:05.550656531Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d523f1298c385184a9e7db15e8734998757e06fff0e55aa1844ab59ccf00f07e,PodSandboxId:8ddf36dc2effd5b387a7cb5392afc6a34d689dee2d0047e27480f81871db6f74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727810590371295276,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75485355206ed8610939d128b62d0d55ec84cbb8615f7261469f854e52b6447d,PodSandboxId:7ea8efe8e5b7908a13d9ad3be6c8e7a9e871b332f5889126ea13429055eaaed3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727810449416160580,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3,PodSandboxId:b4ab4980fd9c6cd234ac1f28d252c896bc77b9f7b271045be59cae0784f96217,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449360614251,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a,PodSandboxId:69e4ceb6e3399a835dcd125c01919d9053abee035e3bf2b2595377489dc56cb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727810449354501899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-50
17-4ada-bf2f-61b640ee2030,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525,PodSandboxId:f7fcfb918d1fd69f11a77764747f9408ce5ec85d4f114f22269ab22622168240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278104
37213853474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c,PodSandboxId:65474abfbeabf483f4a28a3e3229b175f3f41ffad78ea5d1303d49d9419d3ec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727810437061931545,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c962c4138a0019c76f981b72e8948efd386403002bb686f7e70c38bc20c3d542,PodSandboxId:cb787d15fa3b89fa5c4a479def2aed57fb03a1d1abdf07f6170488ae8f5b774f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727810427447788769,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00cf6ac3eb69fe181eb29ee323afb176,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7,PodSandboxId:4873897c8ffd7c74e4093155af6f4743491c9dcea652c8fba96e6904de277652,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727810424745051374,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e,PodSandboxId:c74bc4df7851a3393d5eca165d5e24dbdef7af644bc9844819bad425c8663aac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727810424759083818,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2c57920320eb65f4477cbff8aec818a7ecddb3461a77ca111172e4d1a5f7e71,PodSandboxId:f74fa319889b0624f5157b5a226af0e5a24f37f6922332d6e0037af95aa50aef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727810424668320387,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cd510d04d444e2a3fd26699f0dbb26,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc9d05172b801a89ec36c0d0d549bfeee1aafb67f6586e3f4f163cb63f01d062,PodSandboxId:d6e9deea0a8069c598634405f46a9fa565d1d0b888716c0c94e4ed5b44588a10,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727810424633610117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ba70b19-fd69-4c13-8ba2-50b88c5090a7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d523f1298c385       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   8ddf36dc2effd       busybox-7dff88458-rbjkx
	75485355206ed       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   7ea8efe8e5b79       storage-provisioner
	b9a32cfd9baec       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   b4ab4980fd9c6       coredns-7c65d6cfc9-hd5hv
	c598f8345f1d8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   69e4ceb6e3399       coredns-7c65d6cfc9-v2wsx
	25b91984e532b       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   f7fcfb918d1fd       kindnet-wnr6g
	6ce5a1ca06729       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   65474abfbeabf       kube-proxy-zpsll
	c962c4138a001       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   cb787d15fa3b8       kube-vip-ha-193737
	7092a3841df08       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   c74bc4df7851a       etcd-ha-193737
	d7d722793679c       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   4873897c8ffd7       kube-scheduler-ha-193737
	d2c57920320eb       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   f74fa319889b0       kube-apiserver-ha-193737
	fc9d05172b801       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   d6e9deea0a806       kube-controller-manager-ha-193737
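
A comparable container-status snapshot can be regenerated from the host while the ha-193737 profile is still running; this is a reference sketch rather than a command captured in the log above, and it assumes crictl is available inside the minikube VM (it normally ships with the VM image):

  minikube -p ha-193737 ssh -- sudo crictl ps -a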
	
	
	==> coredns [b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3] <==
	[INFO] 10.244.1.2:43526 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.003536908s
	[INFO] 10.244.1.2:59594 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.012224538s
	[INFO] 10.244.2.2:37785 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000112105s
	[INFO] 10.244.0.4:34398 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000118394s
	[INFO] 10.244.0.4:35218 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001965777s
	[INFO] 10.244.1.2:56827 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00018086s
	[INFO] 10.244.1.2:50439 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003922693s
	[INFO] 10.244.2.2:33611 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123417s
	[INFO] 10.244.2.2:37877 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000204398s
	[INFO] 10.244.2.2:42894 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000164711s
	[INFO] 10.244.0.4:58512 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012749s
	[INFO] 10.244.0.4:60496 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126088s
	[INFO] 10.244.0.4:42876 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000054151s
	[INFO] 10.244.0.4:46048 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001023388s
	[INFO] 10.244.0.4:45307 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069619s
	[INFO] 10.244.0.4:54830 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086737s
	[INFO] 10.244.1.2:56566 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104818s
	[INFO] 10.244.2.2:44960 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017462s
	[INFO] 10.244.2.2:35520 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000147677s
	[INFO] 10.244.0.4:34887 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000089068s
	[INFO] 10.244.0.4:47038 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093137s
	[INFO] 10.244.1.2:44935 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000181924s
	[INFO] 10.244.2.2:51593 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000184246s
	[INFO] 10.244.2.2:37070 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000101666s
	[INFO] 10.244.0.4:49420 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115127s
	
	
	==> coredns [c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a] <==
	[INFO] 10.244.1.2:42880 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000139838s
	[INFO] 10.244.1.2:41832 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000162686s
	[INFO] 10.244.1.2:46697 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110911s
	[INFO] 10.244.2.2:37495 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001830157s
	[INFO] 10.244.2.2:39183 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155283s
	[INFO] 10.244.2.2:47614 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170182s
	[INFO] 10.244.2.2:52937 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001095974s
	[INFO] 10.244.2.2:59751 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106474s
	[INFO] 10.244.0.4:55786 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001514187s
	[INFO] 10.244.0.4:56387 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000050769s
	[INFO] 10.244.1.2:54787 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013733s
	[INFO] 10.244.1.2:58281 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113165s
	[INFO] 10.244.1.2:48712 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097722s
	[INFO] 10.244.2.2:57237 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152523s
	[INFO] 10.244.2.2:47314 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106445s
	[INFO] 10.244.0.4:43887 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000199016s
	[INFO] 10.244.0.4:49901 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000240769s
	[INFO] 10.244.1.2:54100 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210259s
	[INFO] 10.244.1.2:60342 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000221646s
	[INFO] 10.244.1.2:33783 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000165277s
	[INFO] 10.244.2.2:45378 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000197846s
	[INFO] 10.244.2.2:33324 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000101556s
	[INFO] 10.244.0.4:40016 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000071122s
	[INFO] 10.244.0.4:40114 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135338s
	[INFO] 10.244.0.4:53904 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00006854s
	
	
	==> describe nodes <==
	Name:               ha-193737
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-193737
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=ha-193737
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T19_20_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:20:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-193737
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:27:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 19:23:34 +0000   Tue, 01 Oct 2024 19:20:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 19:23:34 +0000   Tue, 01 Oct 2024 19:20:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 19:23:34 +0000   Tue, 01 Oct 2024 19:20:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 19:23:34 +0000   Tue, 01 Oct 2024 19:20:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.14
	  Hostname:    ha-193737
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 008c1ccd624b4ab3b90055ff9f65b018
	  System UUID:                008c1ccd-624b-4ab3-b900-55ff9f65b018
	  Boot ID:                    ad12c9f1-7a18-4d35-9ec9-00d91da3365b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rbjkx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 coredns-7c65d6cfc9-hd5hv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m29s
	  kube-system                 coredns-7c65d6cfc9-v2wsx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m29s
	  kube-system                 etcd-ha-193737                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m34s
	  kube-system                 kindnet-wnr6g                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m30s
	  kube-system                 kube-apiserver-ha-193737             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-controller-manager-ha-193737    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-proxy-zpsll                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-scheduler-ha-193737             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-vip-ha-193737                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m28s                  kube-proxy       
	  Normal  Starting                 6m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     6m41s (x7 over 6m42s)  kubelet          Node ha-193737 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m41s (x8 over 6m42s)  kubelet          Node ha-193737 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m41s (x8 over 6m42s)  kubelet          Node ha-193737 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  6m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m35s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m34s                  kubelet          Node ha-193737 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m34s                  kubelet          Node ha-193737 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m34s                  kubelet          Node ha-193737 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m30s                  node-controller  Node ha-193737 event: Registered Node ha-193737 in Controller
	  Normal  NodeReady                6m17s                  kubelet          Node ha-193737 status is now: NodeReady
	  Normal  RegisteredNode           5m34s                  node-controller  Node ha-193737 event: Registered Node ha-193737 in Controller
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-193737 event: Registered Node ha-193737 in Controller
	
	
	Name:               ha-193737-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-193737-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=ha-193737
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T19_21_26_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:21:23 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-193737-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:24:17 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 01 Oct 2024 19:23:25 +0000   Tue, 01 Oct 2024 19:25:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 01 Oct 2024 19:23:25 +0000   Tue, 01 Oct 2024 19:25:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 01 Oct 2024 19:23:25 +0000   Tue, 01 Oct 2024 19:25:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 01 Oct 2024 19:23:25 +0000   Tue, 01 Oct 2024 19:25:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    ha-193737-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e20c76476d7c4acaa5fd75e5b8fa3bab
	  System UUID:                e20c7647-6d7c-4aca-a5fd-75e5b8fa3bab
	  Boot ID:                    6ae84c19-5df4-457f-b75c-eae86d5e0ee1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fz5bb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 etcd-ha-193737-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m40s
	  kube-system                 kindnet-drdlr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m42s
	  kube-system                 kube-apiserver-ha-193737-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-controller-manager-ha-193737-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-proxy-4294m                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	  kube-system                 kube-scheduler-ha-193737-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-vip-ha-193737-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m38s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m42s (x8 over 5m42s)  kubelet          Node ha-193737-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m42s (x8 over 5m42s)  kubelet          Node ha-193737-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m42s (x7 over 5m42s)  kubelet          Node ha-193737-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m40s                  node-controller  Node ha-193737-m02 event: Registered Node ha-193737-m02 in Controller
	  Normal  RegisteredNode           5m34s                  node-controller  Node ha-193737-m02 event: Registered Node ha-193737-m02 in Controller
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-193737-m02 event: Registered Node ha-193737-m02 in Controller
	  Normal  NodeNotReady             2m5s                   node-controller  Node ha-193737-m02 status is now: NodeNotReady
	
	
	Name:               ha-193737-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-193737-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=ha-193737
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T19_22_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:22:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-193737-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:27:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 19:23:39 +0000   Tue, 01 Oct 2024 19:22:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 19:23:39 +0000   Tue, 01 Oct 2024 19:22:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 19:23:39 +0000   Tue, 01 Oct 2024 19:22:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 19:23:39 +0000   Tue, 01 Oct 2024 19:22:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    ha-193737-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f175e16bf19e4217880e926a75ac0065
	  System UUID:                f175e16b-f19e-4217-880e-926a75ac0065
	  Boot ID:                    5dc1c664-a01d-46eb-a066-a1970597b392
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-qzzzv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 etcd-ha-193737-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m25s
	  kube-system                 kindnet-bqht8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m27s
	  kube-system                 kube-apiserver-ha-193737-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-controller-manager-ha-193737-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-proxy-9pm4t                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-scheduler-ha-193737-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-vip-ha-193737-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m23s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m27s (x8 over 4m27s)  kubelet          Node ha-193737-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m27s (x8 over 4m27s)  kubelet          Node ha-193737-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m27s (x7 over 4m27s)  kubelet          Node ha-193737-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m25s                  node-controller  Node ha-193737-m03 event: Registered Node ha-193737-m03 in Controller
	  Normal  RegisteredNode           4m24s                  node-controller  Node ha-193737-m03 event: Registered Node ha-193737-m03 in Controller
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-193737-m03 event: Registered Node ha-193737-m03 in Controller
	
	
	Name:               ha-193737-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-193737-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=ha-193737
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T19_23_47_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:23:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-193737-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:27:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 19:24:17 +0000   Tue, 01 Oct 2024 19:23:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 19:24:17 +0000   Tue, 01 Oct 2024 19:23:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 19:24:17 +0000   Tue, 01 Oct 2024 19:23:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 19:24:17 +0000   Tue, 01 Oct 2024 19:24:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.152
	  Hostname:    ha-193737-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef1097b5e0604ff19d7361f2921010b9
	  System UUID:                ef1097b5-e060-4ff1-9d73-61f2921010b9
	  Boot ID:                    e616be63-4a8a-41b8-a0fc-2b1d892a1200
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-h886q       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m20s
	  kube-system                 kube-proxy-hz2nn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m20s (x3 over 3m20s)  kubelet          Node ha-193737-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m20s (x3 over 3m20s)  kubelet          Node ha-193737-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m20s (x3 over 3m20s)  kubelet          Node ha-193737-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m16s                  node-controller  Node ha-193737-m04 event: Registered Node ha-193737-m04 in Controller
	  Normal  RegisteredNode           3m15s                  node-controller  Node ha-193737-m04 event: Registered Node ha-193737-m04 in Controller
	  Normal  RegisteredNode           3m15s                  node-controller  Node ha-193737-m04 event: Registered Node ha-193737-m04 in Controller
	  Normal  NodeReady                3m                     kubelet          Node ha-193737-m04 status is now: NodeReady
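
Of the four nodes described above, only ha-193737-m02 carries node.kubernetes.io/unreachable taints and reports all conditions as Unknown ("Kubelet stopped posting node status"), i.e. the secondary control-plane node is down while m01, m03 and m04 remain Ready. A minimal sketch for spot-checking this from the host, assuming minikube's default kubeconfig context name (the profile name) — these commands are illustrative, not taken from the captured output:

  kubectl --context ha-193737 get nodes -o wide
  kubectl --context ha-193737 describe node ha-193737-m02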
	
	
	==> dmesg <==
	[Oct 1 19:19] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050773] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037054] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.754509] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.921161] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Oct 1 19:20] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.804167] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.059657] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065329] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.157689] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.148971] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.256595] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +3.897654] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +5.026995] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.059544] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.061605] systemd-fstab-generator[1305]: Ignoring "noauto" option for root device
	[  +0.119912] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.150839] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.375138] kauditd_printk_skb: 38 callbacks suppressed
	[Oct 1 19:21] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e] <==
	{"level":"warn","ts":"2024-10-01T19:27:05.815793Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:27:05.819952Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:27:05.830916Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:27:05.838326Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:27:05.845905Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:27:05.849497Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:27:05.857584Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:27:05.859772Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:27:05.874078Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.27:2380/version","remote-member-id":"be719cfe4c1d88a","error":"Get \"https://192.168.39.27:2380/version\": dial tcp 192.168.39.27:2380: i/o timeout"}
	{"level":"warn","ts":"2024-10-01T19:27:05.874213Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"be719cfe4c1d88a","error":"Get \"https://192.168.39.27:2380/version\": dial tcp 192.168.39.27:2380: i/o timeout"}
	{"level":"warn","ts":"2024-10-01T19:27:05.927603Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:27:05.935148Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:27:05.942467Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:27:05.947011Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:27:05.952048Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:27:05.959777Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:27:05.961672Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:27:05.968475Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:27:05.976477Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:27:05.980881Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:27:05.984457Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:27:05.987935Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:27:05.993814Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:27:06.000376Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:27:06.058957Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:27:06 up 7 min,  0 users,  load average: 0.37, 0.33, 0.18
	Linux ha-193737 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525] <==
	I1001 19:26:28.354480       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	I1001 19:26:38.345063       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I1001 19:26:38.345186       1 main.go:299] handling current node
	I1001 19:26:38.345230       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I1001 19:26:38.345253       1 main.go:322] Node ha-193737-m02 has CIDR [10.244.1.0/24] 
	I1001 19:26:38.345420       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I1001 19:26:38.345447       1 main.go:322] Node ha-193737-m03 has CIDR [10.244.2.0/24] 
	I1001 19:26:38.345532       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1001 19:26:38.345554       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	I1001 19:26:48.348795       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I1001 19:26:48.348915       1 main.go:322] Node ha-193737-m02 has CIDR [10.244.1.0/24] 
	I1001 19:26:48.349232       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I1001 19:26:48.349245       1 main.go:322] Node ha-193737-m03 has CIDR [10.244.2.0/24] 
	I1001 19:26:48.349309       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1001 19:26:48.349316       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	I1001 19:26:48.349384       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I1001 19:26:48.349392       1 main.go:299] handling current node
	I1001 19:26:58.353065       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I1001 19:26:58.353567       1 main.go:299] handling current node
	I1001 19:26:58.353642       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I1001 19:26:58.353908       1 main.go:322] Node ha-193737-m02 has CIDR [10.244.1.0/24] 
	I1001 19:26:58.354113       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I1001 19:26:58.354274       1 main.go:322] Node ha-193737-m03 has CIDR [10.244.2.0/24] 
	I1001 19:26:58.354412       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1001 19:26:58.354463       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [d2c57920320eb65f4477cbff8aec818a7ecddb3461a77ca111172e4d1a5f7e71] <==
	I1001 19:20:35.856444       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1001 19:20:35.965501       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1001 19:21:24.240949       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1001 19:21:24.240967       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 17.015µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1001 19:21:24.242740       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1001 19:21:24.244065       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1001 19:21:24.245377       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.686767ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1001 19:23:11.375797       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53914: use of closed network connection
	E1001 19:23:11.551258       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53928: use of closed network connection
	E1001 19:23:11.731362       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53936: use of closed network connection
	E1001 19:23:11.972041       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53954: use of closed network connection
	E1001 19:23:12.366625       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53984: use of closed network connection
	E1001 19:23:12.546073       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54012: use of closed network connection
	E1001 19:23:12.732610       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54022: use of closed network connection
	E1001 19:23:12.902151       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54038: use of closed network connection
	E1001 19:23:13.375286       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54102: use of closed network connection
	E1001 19:23:13.554664       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54126: use of closed network connection
	E1001 19:23:13.743236       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54138: use of closed network connection
	E1001 19:23:13.926913       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54164: use of closed network connection
	E1001 19:23:14.106331       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54176: use of closed network connection
	E1001 19:23:47.033544       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1001 19:23:47.034526       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 71.236µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1001 19:23:47.042011       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1001 19:23:47.046959       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1001 19:23:47.048673       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="15.259067ms" method="POST" path="/api/v1/namespaces/default/events" result=null
	
	
	==> kube-controller-manager [fc9d05172b801a89ec36c0d0d549bfeee1aafb67f6586e3f4f163cb63f01d062] <==
	I1001 19:23:46.953662       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-193737-m04\" does not exist"
	I1001 19:23:46.986878       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-193737-m04" podCIDRs=["10.244.3.0/24"]
	I1001 19:23:46.986941       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:46.987007       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:47.215804       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:47.592799       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:50.155095       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-193737-m04"
	I1001 19:23:50.259908       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:51.578375       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:51.680209       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:51.931826       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:52.014093       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:23:57.305544       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:24:06.597966       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:24:06.598358       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-193737-m04"
	I1001 19:24:06.614401       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:24:06.949883       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:24:17.699273       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:25:00.186561       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m02"
	I1001 19:25:00.186799       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-193737-m04"
	I1001 19:25:00.216973       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m02"
	I1001 19:25:00.303275       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="68.678995ms"
	I1001 19:25:00.303561       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="142.589µs"
	I1001 19:25:01.983529       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m02"
	I1001 19:25:05.453661       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m02"
	
	
	==> kube-proxy [6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 19:20:37.420079       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 19:20:37.442921       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.14"]
	E1001 19:20:37.443047       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 19:20:37.482251       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 19:20:37.482297       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 19:20:37.482322       1 server_linux.go:169] "Using iptables Proxier"
	I1001 19:20:37.485863       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 19:20:37.486623       1 server.go:483] "Version info" version="v1.31.1"
	I1001 19:20:37.486654       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 19:20:37.489107       1 config.go:199] "Starting service config controller"
	I1001 19:20:37.489328       1 config.go:105] "Starting endpoint slice config controller"
	I1001 19:20:37.489656       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 19:20:37.489772       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 19:20:37.491468       1 config.go:328] "Starting node config controller"
	I1001 19:20:37.491495       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 19:20:37.590528       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 19:20:37.590619       1 shared_informer.go:320] Caches are synced for service config
	I1001 19:20:37.591994       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7] <==
	E1001 19:20:29.084572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1001 19:20:30.974700       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1001 19:23:06.369501       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rbjkx\": pod busybox-7dff88458-rbjkx is already assigned to node \"ha-193737\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rbjkx" node="ha-193737"
	E1001 19:23:06.370091       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ba3ecbe1-fb88-4674-b679-a442b28cd68e(default/busybox-7dff88458-rbjkx) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-rbjkx"
	E1001 19:23:06.370388       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rbjkx\": pod busybox-7dff88458-rbjkx is already assigned to node \"ha-193737\"" pod="default/busybox-7dff88458-rbjkx"
	I1001 19:23:06.374870       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-rbjkx" node="ha-193737"
	E1001 19:23:06.474319       1 schedule_one.go:1078] "Error occurred" err="Pod default/busybox-7dff88458-9k8vh is already present in the active queue" pod="default/busybox-7dff88458-9k8vh"
	E1001 19:23:06.510626       1 schedule_one.go:1078] "Error occurred" err="Pod default/busybox-7dff88458-x4nmn is already present in the active queue" pod="default/busybox-7dff88458-x4nmn"
	E1001 19:23:47.032927       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-tfcsk\": pod kindnet-tfcsk is already assigned to node \"ha-193737-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-tfcsk" node="ha-193737-m04"
	E1001 19:23:47.033064       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-tfcsk\": pod kindnet-tfcsk is already assigned to node \"ha-193737-m04\"" pod="kube-system/kindnet-tfcsk"
	E1001 19:23:47.032927       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-hz2nn\": pod kube-proxy-hz2nn is already assigned to node \"ha-193737-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-hz2nn" node="ha-193737-m04"
	E1001 19:23:47.045815       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 4f960179-106c-4201-b54b-eea8c5aea0dc(kube-system/kube-proxy-hz2nn) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-hz2nn"
	E1001 19:23:47.046589       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-hz2nn\": pod kube-proxy-hz2nn is already assigned to node \"ha-193737-m04\"" pod="kube-system/kube-proxy-hz2nn"
	I1001 19:23:47.046769       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-hz2nn" node="ha-193737-m04"
	E1001 19:23:47.062993       1 schedule_one.go:953] "Scheduler cache AssumePod failed" err="pod 046c48a4-b41b-4a77-8949-aa553947416b(kube-system/kindnet-h886q) is in the cache, so can't be assumed" pod="kube-system/kindnet-h886q"
	E1001 19:23:47.065004       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="pod 046c48a4-b41b-4a77-8949-aa553947416b(kube-system/kindnet-h886q) is in the cache, so can't be assumed" pod="kube-system/kindnet-h886q"
	I1001 19:23:47.065109       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-h886q" node="ha-193737-m04"
	E1001 19:23:47.081592       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-z5qhk\": pod kube-proxy-z5qhk is already assigned to node \"ha-193737-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-z5qhk" node="ha-193737-m04"
	E1001 19:23:47.081864       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 785d6c85-2697-4f02-80a4-55483a0faa64(kube-system/kube-proxy-z5qhk) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-z5qhk"
	E1001 19:23:47.081920       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-z5qhk\": pod kube-proxy-z5qhk is already assigned to node \"ha-193737-m04\"" pod="kube-system/kube-proxy-z5qhk"
	I1001 19:23:47.083299       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-z5qhk" node="ha-193737-m04"
	E1001 19:23:47.138476       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4q2pc\": pod kindnet-4q2pc is already assigned to node \"ha-193737-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4q2pc" node="ha-193737-m04"
	E1001 19:23:47.138649       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f23b02a5-c64e-44c3-83b9-7192d19a6efc(kube-system/kindnet-4q2pc) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-4q2pc"
	E1001 19:23:47.138779       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4q2pc\": pod kindnet-4q2pc is already assigned to node \"ha-193737-m04\"" pod="kube-system/kindnet-4q2pc"
	I1001 19:23:47.138823       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-4q2pc" node="ha-193737-m04"
	
	
	==> kubelet <==
	Oct 01 19:25:31 ha-193737 kubelet[1313]: E1001 19:25:31.112855    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810731112438565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:25:31 ha-193737 kubelet[1313]: E1001 19:25:31.112899    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810731112438565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:25:41 ha-193737 kubelet[1313]: E1001 19:25:41.114457    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810741114104863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:25:41 ha-193737 kubelet[1313]: E1001 19:25:41.114791    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810741114104863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:25:51 ha-193737 kubelet[1313]: E1001 19:25:51.116278    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810751115811001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:25:51 ha-193737 kubelet[1313]: E1001 19:25:51.116653    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810751115811001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:01 ha-193737 kubelet[1313]: E1001 19:26:01.119303    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810761118827447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:01 ha-193737 kubelet[1313]: E1001 19:26:01.119351    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810761118827447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:11 ha-193737 kubelet[1313]: E1001 19:26:11.121360    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810771121035313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:11 ha-193737 kubelet[1313]: E1001 19:26:11.121412    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810771121035313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:21 ha-193737 kubelet[1313]: E1001 19:26:21.123512    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810781123120430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:21 ha-193737 kubelet[1313]: E1001 19:26:21.123938    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810781123120430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:31 ha-193737 kubelet[1313]: E1001 19:26:31.044582    1313 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 01 19:26:31 ha-193737 kubelet[1313]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 01 19:26:31 ha-193737 kubelet[1313]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 01 19:26:31 ha-193737 kubelet[1313]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 19:26:31 ha-193737 kubelet[1313]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 19:26:31 ha-193737 kubelet[1313]: E1001 19:26:31.126194    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810791125910385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:31 ha-193737 kubelet[1313]: E1001 19:26:31.126217    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810791125910385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:41 ha-193737 kubelet[1313]: E1001 19:26:41.128087    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810801127576002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:41 ha-193737 kubelet[1313]: E1001 19:26:41.128431    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810801127576002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:51 ha-193737 kubelet[1313]: E1001 19:26:51.130945    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810811130429680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:26:51 ha-193737 kubelet[1313]: E1001 19:26:51.131267    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810811130429680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:27:01 ha-193737 kubelet[1313]: E1001 19:27:01.134172    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810821133580624,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:27:01 ha-193737 kubelet[1313]: E1001 19:27:01.134202    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727810821133580624,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-193737 -n ha-193737
helpers_test.go:261: (dbg) Run:  kubectl --context ha-193737 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.28s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (374.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-193737 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-193737 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-193737 -v=7 --alsologtostderr: exit status 82 (2m1.856079371s)

                                                
                                                
-- stdout --
	* Stopping node "ha-193737-m04"  ...
	* Stopping node "ha-193737-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 19:27:07.081724   36853 out.go:345] Setting OutFile to fd 1 ...
	I1001 19:27:07.081991   36853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:27:07.082006   36853 out.go:358] Setting ErrFile to fd 2...
	I1001 19:27:07.082011   36853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:27:07.082228   36853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 19:27:07.082478   36853 out.go:352] Setting JSON to false
	I1001 19:27:07.082572   36853 mustload.go:65] Loading cluster: ha-193737
	I1001 19:27:07.082962   36853 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:27:07.083047   36853 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:27:07.083248   36853 mustload.go:65] Loading cluster: ha-193737
	I1001 19:27:07.083405   36853 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:27:07.083434   36853 stop.go:39] StopHost: ha-193737-m04
	I1001 19:27:07.083837   36853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:27:07.083886   36853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:27:07.099606   36853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37393
	I1001 19:27:07.100096   36853 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:27:07.100679   36853 main.go:141] libmachine: Using API Version  1
	I1001 19:27:07.100711   36853 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:27:07.101037   36853 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:27:07.103398   36853 out.go:177] * Stopping node "ha-193737-m04"  ...
	I1001 19:27:07.104338   36853 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1001 19:27:07.104391   36853 main.go:141] libmachine: (ha-193737-m04) Calling .DriverName
	I1001 19:27:07.104650   36853 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1001 19:27:07.104695   36853 main.go:141] libmachine: (ha-193737-m04) Calling .GetSSHHostname
	I1001 19:27:07.108106   36853 main.go:141] libmachine: (ha-193737-m04) DBG | domain ha-193737-m04 has defined MAC address 52:54:00:18:e8:54 in network mk-ha-193737
	I1001 19:27:07.108598   36853 main.go:141] libmachine: (ha-193737-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:e8:54", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:23:28 +0000 UTC Type:0 Mac:52:54:00:18:e8:54 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-193737-m04 Clientid:01:52:54:00:18:e8:54}
	I1001 19:27:07.108633   36853 main.go:141] libmachine: (ha-193737-m04) DBG | domain ha-193737-m04 has defined IP address 192.168.39.152 and MAC address 52:54:00:18:e8:54 in network mk-ha-193737
	I1001 19:27:07.108844   36853 main.go:141] libmachine: (ha-193737-m04) Calling .GetSSHPort
	I1001 19:27:07.109086   36853 main.go:141] libmachine: (ha-193737-m04) Calling .GetSSHKeyPath
	I1001 19:27:07.109249   36853 main.go:141] libmachine: (ha-193737-m04) Calling .GetSSHUsername
	I1001 19:27:07.109386   36853 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m04/id_rsa Username:docker}
	I1001 19:27:07.202153   36853 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1001 19:27:07.257492   36853 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1001 19:27:07.311206   36853 main.go:141] libmachine: Stopping "ha-193737-m04"...
	I1001 19:27:07.311239   36853 main.go:141] libmachine: (ha-193737-m04) Calling .GetState
	I1001 19:27:07.313352   36853 main.go:141] libmachine: (ha-193737-m04) Calling .Stop
	I1001 19:27:07.317310   36853 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 0/120
	I1001 19:27:08.434693   36853 main.go:141] libmachine: (ha-193737-m04) Calling .GetState
	I1001 19:27:08.436238   36853 main.go:141] libmachine: Machine "ha-193737-m04" was stopped.
	I1001 19:27:08.436259   36853 stop.go:75] duration metric: took 1.331922423s to stop
	I1001 19:27:08.436312   36853 stop.go:39] StopHost: ha-193737-m03
	I1001 19:27:08.436666   36853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:27:08.436713   36853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:27:08.455471   36853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42365
	I1001 19:27:08.455929   36853 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:27:08.456474   36853 main.go:141] libmachine: Using API Version  1
	I1001 19:27:08.456501   36853 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:27:08.456960   36853 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:27:08.458539   36853 out.go:177] * Stopping node "ha-193737-m03"  ...
	I1001 19:27:08.459700   36853 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1001 19:27:08.459730   36853 main.go:141] libmachine: (ha-193737-m03) Calling .DriverName
	I1001 19:27:08.460013   36853 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1001 19:27:08.460036   36853 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHHostname
	I1001 19:27:08.463523   36853 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:27:08.464165   36853 main.go:141] libmachine: (ha-193737-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b9:5c", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:22:04 +0000 UTC Type:0 Mac:52:54:00:9e:b9:5c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-193737-m03 Clientid:01:52:54:00:9e:b9:5c}
	I1001 19:27:08.464194   36853 main.go:141] libmachine: (ha-193737-m03) DBG | domain ha-193737-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:9e:b9:5c in network mk-ha-193737
	I1001 19:27:08.464560   36853 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHPort
	I1001 19:27:08.464764   36853 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHKeyPath
	I1001 19:27:08.464899   36853 main.go:141] libmachine: (ha-193737-m03) Calling .GetSSHUsername
	I1001 19:27:08.465012   36853 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m03/id_rsa Username:docker}
	I1001 19:27:08.564786   36853 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1001 19:27:08.622131   36853 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1001 19:27:08.678977   36853 main.go:141] libmachine: Stopping "ha-193737-m03"...
	I1001 19:27:08.679008   36853 main.go:141] libmachine: (ha-193737-m03) Calling .GetState
	I1001 19:27:08.680802   36853 main.go:141] libmachine: (ha-193737-m03) Calling .Stop
	I1001 19:27:08.684791   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 0/120
	I1001 19:27:09.686527   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 1/120
	I1001 19:27:10.688013   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 2/120
	I1001 19:27:11.689802   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 3/120
	I1001 19:27:12.691181   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 4/120
	I1001 19:27:13.693140   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 5/120
	I1001 19:27:14.695315   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 6/120
	I1001 19:27:15.697234   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 7/120
	I1001 19:27:16.698735   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 8/120
	I1001 19:27:17.700250   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 9/120
	I1001 19:27:18.702517   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 10/120
	I1001 19:27:19.704179   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 11/120
	I1001 19:27:20.706013   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 12/120
	I1001 19:27:21.707494   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 13/120
	I1001 19:27:22.709223   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 14/120
	I1001 19:27:23.711320   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 15/120
	I1001 19:27:24.712995   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 16/120
	I1001 19:27:25.714849   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 17/120
	I1001 19:27:26.716495   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 18/120
	I1001 19:27:27.718064   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 19/120
	I1001 19:27:28.719914   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 20/120
	I1001 19:27:29.721672   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 21/120
	I1001 19:27:30.723338   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 22/120
	I1001 19:27:31.725246   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 23/120
	I1001 19:27:32.726705   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 24/120
	I1001 19:27:33.728961   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 25/120
	I1001 19:27:34.730965   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 26/120
	I1001 19:27:35.732620   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 27/120
	I1001 19:27:36.734331   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 28/120
	I1001 19:27:37.735836   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 29/120
	I1001 19:27:38.738189   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 30/120
	I1001 19:27:39.739704   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 31/120
	I1001 19:27:40.741417   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 32/120
	I1001 19:27:41.743159   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 33/120
	I1001 19:27:42.745278   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 34/120
	I1001 19:27:43.747208   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 35/120
	I1001 19:27:44.748777   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 36/120
	I1001 19:27:45.750320   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 37/120
	I1001 19:27:46.751645   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 38/120
	I1001 19:27:47.753317   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 39/120
	I1001 19:27:48.755113   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 40/120
	I1001 19:27:49.756523   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 41/120
	I1001 19:27:50.758207   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 42/120
	I1001 19:27:51.759352   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 43/120
	I1001 19:27:52.761161   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 44/120
	I1001 19:27:53.763202   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 45/120
	I1001 19:27:54.764848   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 46/120
	I1001 19:27:55.766050   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 47/120
	I1001 19:27:56.767420   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 48/120
	I1001 19:27:57.769020   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 49/120
	I1001 19:27:58.770978   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 50/120
	I1001 19:27:59.772459   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 51/120
	I1001 19:28:00.773798   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 52/120
	I1001 19:28:01.775482   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 53/120
	I1001 19:28:02.776873   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 54/120
	I1001 19:28:03.778545   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 55/120
	I1001 19:28:04.780175   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 56/120
	I1001 19:28:05.781725   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 57/120
	I1001 19:28:06.783369   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 58/120
	I1001 19:28:07.784772   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 59/120
	I1001 19:28:08.786583   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 60/120
	I1001 19:28:09.787942   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 61/120
	I1001 19:28:10.789251   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 62/120
	I1001 19:28:11.790818   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 63/120
	I1001 19:28:12.792192   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 64/120
	I1001 19:28:13.794249   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 65/120
	I1001 19:28:14.795601   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 66/120
	I1001 19:28:15.796941   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 67/120
	I1001 19:28:16.798239   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 68/120
	I1001 19:28:17.799430   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 69/120
	I1001 19:28:18.801248   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 70/120
	I1001 19:28:19.802780   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 71/120
	I1001 19:28:20.804331   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 72/120
	I1001 19:28:21.806086   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 73/120
	I1001 19:28:22.807539   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 74/120
	I1001 19:28:23.809464   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 75/120
	I1001 19:28:24.811408   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 76/120
	I1001 19:28:25.813272   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 77/120
	I1001 19:28:26.814565   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 78/120
	I1001 19:28:27.816473   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 79/120
	I1001 19:28:28.818612   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 80/120
	I1001 19:28:29.820683   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 81/120
	I1001 19:28:30.822245   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 82/120
	I1001 19:28:31.824432   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 83/120
	I1001 19:28:32.825914   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 84/120
	I1001 19:28:33.828189   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 85/120
	I1001 19:28:34.829611   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 86/120
	I1001 19:28:35.830968   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 87/120
	I1001 19:28:36.832731   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 88/120
	I1001 19:28:37.834359   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 89/120
	I1001 19:28:38.836233   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 90/120
	I1001 19:28:39.837662   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 91/120
	I1001 19:28:40.839279   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 92/120
	I1001 19:28:41.840828   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 93/120
	I1001 19:28:42.842419   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 94/120
	I1001 19:28:43.843974   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 95/120
	I1001 19:28:44.845413   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 96/120
	I1001 19:28:45.846992   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 97/120
	I1001 19:28:46.848730   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 98/120
	I1001 19:28:47.850184   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 99/120
	I1001 19:28:48.852213   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 100/120
	I1001 19:28:49.853780   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 101/120
	I1001 19:28:50.855223   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 102/120
	I1001 19:28:51.856694   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 103/120
	I1001 19:28:52.858291   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 104/120
	I1001 19:28:53.860545   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 105/120
	I1001 19:28:54.862396   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 106/120
	I1001 19:28:55.864050   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 107/120
	I1001 19:28:56.865935   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 108/120
	I1001 19:28:57.867424   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 109/120
	I1001 19:28:58.869444   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 110/120
	I1001 19:28:59.871156   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 111/120
	I1001 19:29:00.872546   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 112/120
	I1001 19:29:01.873920   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 113/120
	I1001 19:29:02.875600   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 114/120
	I1001 19:29:03.878645   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 115/120
	I1001 19:29:04.880324   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 116/120
	I1001 19:29:05.881715   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 117/120
	I1001 19:29:06.883937   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 118/120
	I1001 19:29:07.885481   36853 main.go:141] libmachine: (ha-193737-m03) Waiting for machine to stop 119/120
	I1001 19:29:08.886468   36853 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1001 19:29:08.886523   36853 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1001 19:29:08.888220   36853 out.go:201] 
	W1001 19:29:08.889414   36853 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1001 19:29:08.889432   36853 out.go:270] * 
	* 
	W1001 19:29:08.891547   36853 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 19:29:08.892793   36853 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:466: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-193737 -v=7 --alsologtostderr" : exit status 82
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-193737 --wait=true -v=7 --alsologtostderr
E1001 19:31:34.840293   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
E1001 19:31:59.025473   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-193737 --wait=true -v=7 --alsologtostderr: (4m9.878044877s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-193737
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-193737 -n ha-193737
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-193737 logs -n 25: (2.048719496s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-193737 cp ha-193737-m03:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m02:/home/docker/cp-test_ha-193737-m03_ha-193737-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737-m02 sudo cat                                          | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m03_ha-193737-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m03:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04:/home/docker/cp-test_ha-193737-m03_ha-193737-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737-m04 sudo cat                                          | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m03_ha-193737-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-193737 cp testdata/cp-test.txt                                                | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m04:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1122815483/001/cp-test_ha-193737-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m04:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737:/home/docker/cp-test_ha-193737-m04_ha-193737.txt                       |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737 sudo cat                                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m04_ha-193737.txt                                 |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m04:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m02:/home/docker/cp-test_ha-193737-m04_ha-193737-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737-m02 sudo cat                                          | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m04_ha-193737-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m04:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m03:/home/docker/cp-test_ha-193737-m04_ha-193737-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737-m03 sudo cat                                          | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m04_ha-193737-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-193737 node stop m02 -v=7                                                     | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-193737 node start m02 -v=7                                                    | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-193737 -v=7                                                           | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:27 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-193737 -v=7                                                                | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:27 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-193737 --wait=true -v=7                                                    | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:29 UTC | 01 Oct 24 19:33 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-193737                                                                | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:33 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 19:29:08
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 19:29:08.939916   37328 out.go:345] Setting OutFile to fd 1 ...
	I1001 19:29:08.940061   37328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:29:08.940070   37328 out.go:358] Setting ErrFile to fd 2...
	I1001 19:29:08.940075   37328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:29:08.940255   37328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 19:29:08.940925   37328 out.go:352] Setting JSON to false
	I1001 19:29:08.941970   37328 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4291,"bootTime":1727806658,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 19:29:08.942092   37328 start.go:139] virtualization: kvm guest
	I1001 19:29:08.944107   37328 out.go:177] * [ha-193737] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 19:29:08.945224   37328 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 19:29:08.945250   37328 notify.go:220] Checking for updates...
	I1001 19:29:08.947677   37328 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 19:29:08.948984   37328 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 19:29:08.950121   37328 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:29:08.951135   37328 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 19:29:08.952383   37328 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 19:29:08.954030   37328 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:29:08.954158   37328 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 19:29:08.954851   37328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:29:08.954930   37328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:29:08.972315   37328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39757
	I1001 19:29:08.972863   37328 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:29:08.973399   37328 main.go:141] libmachine: Using API Version  1
	I1001 19:29:08.973418   37328 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:29:08.973808   37328 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:29:08.974026   37328 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:29:09.016232   37328 out.go:177] * Using the kvm2 driver based on existing profile
	I1001 19:29:09.017238   37328 start.go:297] selected driver: kvm2
	I1001 19:29:09.017255   37328 start.go:901] validating driver "kvm2" against &{Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.152 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:d
ocker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:29:09.017399   37328 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 19:29:09.017748   37328 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 19:29:09.017862   37328 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-11198/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 19:29:09.034140   37328 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 19:29:09.035179   37328 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 19:29:09.035220   37328 cni.go:84] Creating CNI manager for ""
	I1001 19:29:09.035272   37328 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1001 19:29:09.035346   37328 start.go:340] cluster config:
	{Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.3
9.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.152 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:
false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:29:09.035492   37328 iso.go:125] acquiring lock: {Name:mk06aa0d5182c9fbfa5e40313025cbec2b4400a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 19:29:09.037143   37328 out.go:177] * Starting "ha-193737" primary control-plane node in "ha-193737" cluster
	I1001 19:29:09.038208   37328 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 19:29:09.038259   37328 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 19:29:09.038271   37328 cache.go:56] Caching tarball of preloaded images
	I1001 19:29:09.038387   37328 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 19:29:09.038403   37328 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 19:29:09.038571   37328 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:29:09.038832   37328 start.go:360] acquireMachinesLock for ha-193737: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 19:29:09.038876   37328 start.go:364] duration metric: took 24.118µs to acquireMachinesLock for "ha-193737"
	I1001 19:29:09.038889   37328 start.go:96] Skipping create...Using existing machine configuration
	I1001 19:29:09.038894   37328 fix.go:54] fixHost starting: 
	I1001 19:29:09.039166   37328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:29:09.039202   37328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:29:09.054402   37328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38879
	I1001 19:29:09.054892   37328 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:29:09.055382   37328 main.go:141] libmachine: Using API Version  1
	I1001 19:29:09.055403   37328 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:29:09.055772   37328 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:29:09.055973   37328 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:29:09.056124   37328 main.go:141] libmachine: (ha-193737) Calling .GetState
	I1001 19:29:09.057794   37328 fix.go:112] recreateIfNeeded on ha-193737: state=Running err=<nil>
	W1001 19:29:09.057829   37328 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 19:29:09.059519   37328 out.go:177] * Updating the running kvm2 "ha-193737" VM ...
	I1001 19:29:09.060793   37328 machine.go:93] provisionDockerMachine start ...
	I1001 19:29:09.060817   37328 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:29:09.061040   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:29:09.063725   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:29:09.064214   37328 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:29:09.064240   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:29:09.064406   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:29:09.064594   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:29:09.064743   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:29:09.064855   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:29:09.065011   37328 main.go:141] libmachine: Using SSH client type: native
	I1001 19:29:09.065203   37328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:29:09.065215   37328 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 19:29:09.177577   37328 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-193737
	
	I1001 19:29:09.177611   37328 main.go:141] libmachine: (ha-193737) Calling .GetMachineName
	I1001 19:29:09.177843   37328 buildroot.go:166] provisioning hostname "ha-193737"
	I1001 19:29:09.177912   37328 main.go:141] libmachine: (ha-193737) Calling .GetMachineName
	I1001 19:29:09.178172   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:29:09.181484   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:29:09.181951   37328 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:29:09.181971   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:29:09.182120   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:29:09.182311   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:29:09.182437   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:29:09.182548   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:29:09.182728   37328 main.go:141] libmachine: Using SSH client type: native
	I1001 19:29:09.182945   37328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:29:09.182966   37328 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-193737 && echo "ha-193737" | sudo tee /etc/hostname
	I1001 19:29:09.305362   37328 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-193737
	
	I1001 19:29:09.305390   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:29:09.308770   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:29:09.309176   37328 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:29:09.309201   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:29:09.309443   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:29:09.309651   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:29:09.309888   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:29:09.310094   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:29:09.310355   37328 main.go:141] libmachine: Using SSH client type: native
	I1001 19:29:09.310549   37328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:29:09.310572   37328 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-193737' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-193737/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-193737' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 19:29:09.417404   37328 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 19:29:09.417436   37328 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 19:29:09.417481   37328 buildroot.go:174] setting up certificates
	I1001 19:29:09.417503   37328 provision.go:84] configureAuth start
	I1001 19:29:09.417518   37328 main.go:141] libmachine: (ha-193737) Calling .GetMachineName
	I1001 19:29:09.417786   37328 main.go:141] libmachine: (ha-193737) Calling .GetIP
	I1001 19:29:09.420372   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:29:09.420836   37328 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:29:09.420865   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:29:09.421099   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:29:09.423481   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:29:09.423848   37328 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:29:09.423884   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:29:09.424042   37328 provision.go:143] copyHostCerts
	I1001 19:29:09.424072   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:29:09.424128   37328 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 19:29:09.424137   37328 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:29:09.424205   37328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 19:29:09.424290   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:29:09.424307   37328 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 19:29:09.424320   37328 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:29:09.424346   37328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 19:29:09.424431   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:29:09.424451   37328 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 19:29:09.424455   37328 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:29:09.424492   37328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 19:29:09.424554   37328 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.ha-193737 san=[127.0.0.1 192.168.39.14 ha-193737 localhost minikube]
	I1001 19:29:09.534187   37328 provision.go:177] copyRemoteCerts
	I1001 19:29:09.534239   37328 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 19:29:09.534260   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:29:09.537352   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:29:09.537737   37328 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:29:09.537765   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:29:09.537981   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:29:09.538152   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:29:09.538302   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:29:09.538393   37328 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:29:09.619235   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 19:29:09.619333   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1001 19:29:09.645348   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 19:29:09.645438   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 19:29:09.673071   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 19:29:09.673151   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 19:29:09.704248   37328 provision.go:87] duration metric: took 286.730847ms to configureAuth
	I1001 19:29:09.704279   37328 buildroot.go:189] setting minikube options for container-runtime
	I1001 19:29:09.704615   37328 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:29:09.704693   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:29:09.707374   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:29:09.707795   37328 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:29:09.707823   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:29:09.708006   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:29:09.708215   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:29:09.708350   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:29:09.708482   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:29:09.708621   37328 main.go:141] libmachine: Using SSH client type: native
	I1001 19:29:09.708823   37328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:29:09.708847   37328 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 19:30:40.599375   37328 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 19:30:40.599407   37328 machine.go:96] duration metric: took 1m31.538596323s to provisionDockerMachine
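	Note: nearly all of the 1m31.5s above is the single SSH command issued at 19:29:09.708 and answered at 19:30:40.599, i.e. the `sudo systemctl restart crio` at the end of the tee pipeline; the surrounding provisioning steps each finish in milliseconds. A hypothetical way to see what the restart was waiting on (not part of this run; `crio` is the unit name used throughout this log):
	  sudo journalctl -u crio --no-pager -n 100                          # recent cri-o unit log inside the VM
	  sudo systemctl show crio -p ActiveState -p ExecMainStartTimestamp  # when the restarted daemon actually came up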
	I1001 19:30:40.599423   37328 start.go:293] postStartSetup for "ha-193737" (driver="kvm2")
	I1001 19:30:40.599437   37328 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 19:30:40.599486   37328 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:30:40.599815   37328 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 19:30:40.599849   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:30:40.603054   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:30:40.603452   37328 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:30:40.603476   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:30:40.603668   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:30:40.603834   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:30:40.604021   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:30:40.604162   37328 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:30:40.687847   37328 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 19:30:40.692107   37328 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 19:30:40.692146   37328 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 19:30:40.692208   37328 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 19:30:40.692279   37328 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 19:30:40.692289   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /etc/ssl/certs/184302.pem
	I1001 19:30:40.692420   37328 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 19:30:40.701750   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:30:40.725539   37328 start.go:296] duration metric: took 126.10159ms for postStartSetup
	I1001 19:30:40.725576   37328 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:30:40.725867   37328 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1001 19:30:40.725892   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:30:40.728740   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:30:40.729170   37328 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:30:40.729197   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:30:40.729648   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:30:40.730783   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:30:40.731004   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:30:40.731156   37328 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	W1001 19:30:40.814694   37328 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1001 19:30:40.814729   37328 fix.go:56] duration metric: took 1m31.775834652s for fixHost
	I1001 19:30:40.814757   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:30:40.817578   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:30:40.818056   37328 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:30:40.818091   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:30:40.818248   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:30:40.818449   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:30:40.818604   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:30:40.818723   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:30:40.818870   37328 main.go:141] libmachine: Using SSH client type: native
	I1001 19:30:40.819096   37328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:30:40.819109   37328 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 19:30:40.921284   37328 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727811040.894124143
	
	I1001 19:30:40.921304   37328 fix.go:216] guest clock: 1727811040.894124143
	I1001 19:30:40.921312   37328 fix.go:229] Guest: 2024-10-01 19:30:40.894124143 +0000 UTC Remote: 2024-10-01 19:30:40.81474032 +0000 UTC m=+91.911975595 (delta=79.383823ms)
	I1001 19:30:40.921331   37328 fix.go:200] guest clock delta is within tolerance: 79.383823ms
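	The delta above is simply guest wall clock minus host wall clock: 1727811040.894124143 − 1727811040.81474032 ≈ 0.079383823 s ≈ 79.4 ms, which is why it is reported as within tolerance.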
	I1001 19:30:40.921336   37328 start.go:83] releasing machines lock for "ha-193737", held for 1m31.882452335s
	I1001 19:30:40.921356   37328 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:30:40.921608   37328 main.go:141] libmachine: (ha-193737) Calling .GetIP
	I1001 19:30:40.924593   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:30:40.925006   37328 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:30:40.925027   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:30:40.925218   37328 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:30:40.925706   37328 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:30:40.925881   37328 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:30:40.925992   37328 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 19:30:40.926028   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:30:40.926102   37328 ssh_runner.go:195] Run: cat /version.json
	I1001 19:30:40.926126   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:30:40.928744   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:30:40.928801   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:30:40.929178   37328 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:30:40.929206   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:30:40.929233   37328 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:30:40.929247   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:30:40.929373   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:30:40.929501   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:30:40.929578   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:30:40.929650   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:30:40.929722   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:30:40.929787   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:30:40.929824   37328 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:30:40.929894   37328 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:30:41.045553   37328 ssh_runner.go:195] Run: systemctl --version
	I1001 19:30:41.051502   37328 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 19:30:41.219086   37328 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 19:30:41.225476   37328 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 19:30:41.225565   37328 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 19:30:41.236020   37328 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1001 19:30:41.236050   37328 start.go:495] detecting cgroup driver to use...
	I1001 19:30:41.236122   37328 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 19:30:41.253549   37328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 19:30:41.269349   37328 docker.go:217] disabling cri-docker service (if available) ...
	I1001 19:30:41.269421   37328 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 19:30:41.284876   37328 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 19:30:41.299341   37328 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 19:30:41.453531   37328 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 19:30:41.598069   37328 docker.go:233] disabling docker service ...
	I1001 19:30:41.598135   37328 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 19:30:41.615329   37328 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 19:30:41.628733   37328 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 19:30:41.776481   37328 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 19:30:41.934366   37328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 19:30:41.947596   37328 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 19:30:41.966515   37328 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 19:30:41.966592   37328 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:30:41.977069   37328 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 19:30:41.977135   37328 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:30:41.987034   37328 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:30:41.997115   37328 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:30:42.007263   37328 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 19:30:42.017806   37328 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:30:42.028946   37328 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:30:42.040035   37328 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:30:42.050185   37328 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 19:30:42.059298   37328 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 19:30:42.068579   37328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:30:42.226919   37328 ssh_runner.go:195] Run: sudo systemctl restart crio
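	All of the sed edits above target a single drop-in, /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon_cgroup, the unprivileged-port sysctl), so after the restart the effective settings can be spot-checked straight from that file. A hypothetical check, not part of the test run:
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  sudo systemctl is-active crio   # should print "active" once the restart has finished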
	I1001 19:30:42.463910   37328 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 19:30:42.463995   37328 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 19:30:42.469021   37328 start.go:563] Will wait 60s for crictl version
	I1001 19:30:42.469086   37328 ssh_runner.go:195] Run: which crictl
	I1001 19:30:42.472762   37328 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 19:30:42.511526   37328 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 19:30:42.511607   37328 ssh_runner.go:195] Run: crio --version
	I1001 19:30:42.540609   37328 ssh_runner.go:195] Run: crio --version
	I1001 19:30:42.571459   37328 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 19:30:42.572552   37328 main.go:141] libmachine: (ha-193737) Calling .GetIP
	I1001 19:30:42.575271   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:30:42.575645   37328 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:30:42.575669   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:30:42.575882   37328 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 19:30:42.580521   37328 kubeadm.go:883] updating cluster {Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.152 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 19:30:42.580640   37328 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 19:30:42.580679   37328 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 19:30:42.623368   37328 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 19:30:42.623391   37328 crio.go:433] Images already preloaded, skipping extraction
	I1001 19:30:42.623440   37328 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 19:30:42.659185   37328 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 19:30:42.659208   37328 cache_images.go:84] Images are preloaded, skipping loading
	I1001 19:30:42.659226   37328 kubeadm.go:934] updating node { 192.168.39.14 8443 v1.31.1 crio true true} ...
	I1001 19:30:42.659340   37328 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-193737 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
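	The [Service] fragment above is the kubelet drop-in; the scp lines further down place it at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf next to the base unit in /lib/systemd/system/kubelet.service. A hypothetical way to confirm what the node actually runs with, not part of this test:
	  systemctl cat kubelet    # prints the unit plus every drop-in, including 10-kubeadm.conf
	  ps -o args= -C kubelet   # live command line, showing --node-ip and --hostname-override from the fragment above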
	I1001 19:30:42.659416   37328 ssh_runner.go:195] Run: crio config
	I1001 19:30:42.706099   37328 cni.go:84] Creating CNI manager for ""
	I1001 19:30:42.706123   37328 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1001 19:30:42.706133   37328 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 19:30:42.706154   37328 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.14 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-193737 NodeName:ha-193737 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.14"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.14 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 19:30:42.706281   37328 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.14
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-193737"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.14
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.14"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
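	The block above is the full kubeadm config minikube renders for this control-plane node; it is written out later as /var/tmp/minikube/kubeadm.yaml.new (2150 bytes, per the scp line below). Assuming the staged kubeadm under /var/lib/minikube/binaries/v1.31.1 is new enough to have the `config validate` subcommand, a hypothetical offline validation would be:
	  sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new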
	
	I1001 19:30:42.706301   37328 kube-vip.go:115] generating kube-vip config ...
	I1001 19:30:42.706336   37328 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1001 19:30:42.718095   37328 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1001 19:30:42.718208   37328 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
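	The manifest above lands in /etc/kubernetes/manifests/kube-vip.yaml (see the scp below), so kubelet runs kube-vip as a static pod that announces the HA VIP 192.168.39.254 on eth0 and load-balances port 8443 across the control-plane members. Hypothetical checks that the VIP is actually being served, not part of this run:
	  ls /etc/kubernetes/manifests/kube-vip.yaml
	  ip addr show dev eth0 | grep 192.168.39.254   # the VIP appears on whichever node holds the plndr-cp-lock lease
	  curl -k https://192.168.39.254:8443/healthz   # reaches the apiserver through the VIP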
	I1001 19:30:42.718272   37328 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 19:30:42.728080   37328 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 19:30:42.728148   37328 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1001 19:30:42.737386   37328 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1001 19:30:42.754003   37328 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 19:30:42.770286   37328 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I1001 19:30:42.786791   37328 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1001 19:30:42.803229   37328 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1001 19:30:42.808100   37328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:30:42.957282   37328 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 19:30:42.971531   37328 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737 for IP: 192.168.39.14
	I1001 19:30:42.971555   37328 certs.go:194] generating shared ca certs ...
	I1001 19:30:42.971576   37328 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:30:42.971738   37328 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 19:30:42.971793   37328 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 19:30:42.971807   37328 certs.go:256] generating profile certs ...
	I1001 19:30:42.971890   37328 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key
	I1001 19:30:42.971924   37328 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.058751ee
	I1001 19:30:42.971954   37328 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.058751ee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.14 192.168.39.27 192.168.39.101 192.168.39.254]
	I1001 19:30:43.156442   37328 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.058751ee ...
	I1001 19:30:43.156481   37328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.058751ee: {Name:mk398f3bf2de18eb9255f2abe557f9ee8d4c74e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:30:43.156690   37328 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.058751ee ...
	I1001 19:30:43.156707   37328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.058751ee: {Name:mk011ee79e6c6902067af04844ffcc7247fec588 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:30:43.156812   37328 certs.go:381] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.058751ee -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt
	I1001 19:30:43.156997   37328 certs.go:385] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.058751ee -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key
	I1001 19:30:43.157157   37328 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key
	I1001 19:30:43.157175   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 19:30:43.157195   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 19:30:43.157212   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 19:30:43.157233   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 19:30:43.157252   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 19:30:43.157271   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 19:30:43.157294   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 19:30:43.157312   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 19:30:43.157373   37328 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem (1338 bytes)
	W1001 19:30:43.157414   37328 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430_empty.pem, impossibly tiny 0 bytes
	I1001 19:30:43.157428   37328 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 19:30:43.157469   37328 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 19:30:43.157500   37328 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 19:30:43.157531   37328 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 19:30:43.157588   37328 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:30:43.157626   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /usr/share/ca-certificates/184302.pem
	I1001 19:30:43.157652   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:30:43.157670   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem -> /usr/share/ca-certificates/18430.pem
	I1001 19:30:43.158203   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 19:30:43.182962   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 19:30:43.209517   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 19:30:43.235236   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 19:30:43.259478   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1001 19:30:43.283750   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 19:30:43.307938   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 19:30:43.332315   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 19:30:43.356259   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /usr/share/ca-certificates/184302.pem (1708 bytes)
	I1001 19:30:43.379982   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 19:30:43.403792   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem --> /usr/share/ca-certificates/18430.pem (1338 bytes)
	I1001 19:30:43.427640   37328 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 19:30:43.444144   37328 ssh_runner.go:195] Run: openssl version
	I1001 19:30:43.450283   37328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18430.pem && ln -fs /usr/share/ca-certificates/18430.pem /etc/ssl/certs/18430.pem"
	I1001 19:30:43.461547   37328 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18430.pem
	I1001 19:30:43.465989   37328 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 19:30:43.466049   37328 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18430.pem
	I1001 19:30:43.471519   37328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18430.pem /etc/ssl/certs/51391683.0"
	I1001 19:30:43.480515   37328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/184302.pem && ln -fs /usr/share/ca-certificates/184302.pem /etc/ssl/certs/184302.pem"
	I1001 19:30:43.490784   37328 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/184302.pem
	I1001 19:30:43.494982   37328 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 19:30:43.495021   37328 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/184302.pem
	I1001 19:30:43.500231   37328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/184302.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 19:30:43.509685   37328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 19:30:43.520238   37328 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:30:43.524704   37328 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:30:43.524758   37328 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:30:43.530194   37328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
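
The openssl/ln pairs above install each CA into the system trust store under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0, b5213941.0). A minimal sketch of the same two-step layout for one certificate, using the paths from the log (c_rehash would do the equivalent for a whole directory):

    # Reproduce the install the log performs for minikubeCA.pem:
    # 1) expose the CA under /etc/ssl/certs, 2) add the OpenSSL subject-hash alias.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$cert" /etc/ssl/certs/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")          # prints e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
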
	I1001 19:30:43.539135   37328 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 19:30:43.543573   37328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1001 19:30:43.549061   37328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1001 19:30:43.554388   37328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1001 19:30:43.559805   37328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1001 19:30:43.565256   37328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1001 19:30:43.570513   37328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
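
Each "-checkend 86400" run above exits 0 if the certificate will still be valid 24 hours from now and non-zero if it will have expired by then. A hedged loop that runs the same check over the /var/lib/minikube/certs paths seen in the log, all at once:

    # Exit status of -checkend: 0 = still valid in 86400s (24h), non-zero = expiring.
    for c in /var/lib/minikube/certs/{apiserver-etcd-client,apiserver-kubelet-client,front-proxy-client}.crt \
             /var/lib/minikube/certs/etcd/{server,healthcheck-client,peer}.crt; do
      sudo openssl x509 -noout -in "$c" -checkend 86400 \
        && echo "OK        $c" || echo "EXPIRING  $c"
    done
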
	I1001 19:30:43.575858   37328 kubeadm.go:392] StartCluster: {Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clus
terName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.152 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:30:43.575964   37328 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 19:30:43.576003   37328 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 19:30:43.615896   37328 cri.go:89] found id: "55dcc0edc52a1be3b0b34b8c6d6bb9b7f606b5f9038d86694ef0cb9f8c2783a0"
	I1001 19:30:43.615918   37328 cri.go:89] found id: "01fbb357fab4f3446ed3564800db9f3d7f8ffa47c32db7026219630bec07a664"
	I1001 19:30:43.615922   37328 cri.go:89] found id: "ba9298ce250b67db2ca42f0c725e3969cbe562dea70767c2a9f85e8814364c27"
	I1001 19:30:43.615925   37328 cri.go:89] found id: "75485355206ed8610939d128b62d0d55ec84cbb8615f7261469f854e52b6447d"
	I1001 19:30:43.615928   37328 cri.go:89] found id: "b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3"
	I1001 19:30:43.615931   37328 cri.go:89] found id: "c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a"
	I1001 19:30:43.615933   37328 cri.go:89] found id: "25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525"
	I1001 19:30:43.615935   37328 cri.go:89] found id: "6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c"
	I1001 19:30:43.615939   37328 cri.go:89] found id: "c962c4138a0019c76f981b72e8948efd386403002bb686f7e70c38bc20c3d542"
	I1001 19:30:43.615943   37328 cri.go:89] found id: "7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e"
	I1001 19:30:43.615946   37328 cri.go:89] found id: "d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7"
	I1001 19:30:43.615951   37328 cri.go:89] found id: "d2c57920320eb65f4477cbff8aec818a7ecddb3461a77ca111172e4d1a5f7e71"
	I1001 19:30:43.615954   37328 cri.go:89] found id: "fc9d05172b801a89ec36c0d0d549bfeee1aafb67f6586e3f4f163cb63f01d062"
	I1001 19:30:43.615956   37328 cri.go:89] found id: ""
	I1001 19:30:43.615993   37328 ssh_runner.go:195] Run: sudo runc list -f json
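
Before restarting the cluster, StartCluster enumerates the existing kube-system containers by bare ID (the "found id:" lines above) via crictl and then cross-checks the runc state. For a human-readable view of the same set, a hedged variant of the command from the log, plus an inspect of one of the IDs listed above as an example:

    # Same label filter the log uses, but with the full table instead of --quiet IDs.
    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
    # Drill into a single container from that list (ID taken from the log above).
    sudo crictl inspect 55dcc0edc52a1be3b0b34b8c6d6bb9b7f606b5f9038d86694ef0cb9f8c2783a0 | head
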
	
	
	==> CRI-O <==
	Oct 01 19:33:19 ha-193737 crio[3761]: time="2024-10-01 19:33:19.513565450Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811199513529336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=810728ab-80bb-40b8-8708-bc2ac6e24769 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:33:19 ha-193737 crio[3761]: time="2024-10-01 19:33:19.514320375Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d158fc8c-97ac-4a9b-bf2e-01b9b91ab758 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:33:19 ha-193737 crio[3761]: time="2024-10-01 19:33:19.514425286Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d158fc8c-97ac-4a9b-bf2e-01b9b91ab758 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:33:19 ha-193737 crio[3761]: time="2024-10-01 19:33:19.515036233Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:42d2996fa57056f337846a0c663c666896bc5623403716bf936f95a745c26751,PodSandboxId:d22b13768ce874522ceed8cc3440fb78e51adfcc55c5d48b10e0a1ba76f2fa79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727811144032445382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc73e66125bdf484da9d957113d2dbcd22b1cca191ae53bdd53cddf4df26a9b4,PodSandboxId:f0f8610a34814bf5843b3ba4d4c2b837552c02efdd35a53e341c9b8afa41b367,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727811095028393362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cd510d04d444e2a3fd26699f0dbb26,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe0e73911c1f7037af24faf34585ea0f2dd2050508c66f82fceb1d3f63350357,PodSandboxId:d22b13768ce874522ceed8cc3440fb78e51adfcc55c5d48b10e0a1ba76f2fa79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727811095031826420,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0355d034cef455e56a215145c9058bc9694f5da8c3c4c2172ae416d5af558add,PodSandboxId:9ddcc19db3580843861d67b547a77d3a8d429b55c0a3c2ee3bba3ad96f6c726b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727811091033631042,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da37e56f1a046294a51f258d6619d188b789c2740337a7586e481feaaca27edc,PodSandboxId:fca6d99cf42a83642203449fa4687750128541f4acf4a54e0e9f868f2262e0e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727811082317090540,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:198dab162e8aa5e19a932dc97135bef548d2a2744de22fb4fa9898746a7a9788,PodSandboxId:1bd6f04d79ddbb801bfd04b223d0aa786f5e5f02458dde8c322d43b2862332ad,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727811066130351937,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 813365ea1e3446cbdf9a69d3a73954fd,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e64eec86b7056617efbd7c498622e3f5958857bac4d73be6899dbf8db5c89cf9,PodSandboxId:9133de98e8a424f31f1f22b1bf4d2d17ac28543288eb487a6118187c62434bd9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727811049246022559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82c0e82b0f6c09d87ec13c643737a4ccf5b340e9502946d56bbb217eb96dbe93,PodSandboxId:9c0014ccd9ccba46c670e7c0f6df4fb143de9e63e48a89015e6738bae92e835b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727811048808876184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61ff92cf26d6e3aed32887454ab7d2058de0b8a5e7ea7861f48b1d01a5939727,PodSandboxId:b73cc006e86acbbdb4fd391f82012e597b411d2e2a99955226e63c46f802968b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727811049155845041,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a718e9dc3c409631fa8f5dc4d076b18ea96e3aa4e1019102c18b202167818924,PodSandboxId:7eb78842f19b505167c5397f172a99bb1f7b17780e57a3592433042cee608db5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727811049104667513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-5017-4ada-bf2f-61b640ee2030,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83950c035f12e98c6757b6223f78b7f5a39d863ec5c60eac7c00e820c1c5c076,PodSandboxId:da377024b36877f6c3e94272b41630ae1d13493ca8d22c13b39a58463542dba5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727811048970844751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3382226e00b6ef4a63086f6faaee763c7a138978d0ef813c494eb8ffc1d02c5f,PodSandboxId:f0f8610a34814bf5843b3ba4d4c2b837552c02efdd35a53e341c9b8afa41b367,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727811048981094425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 26cd510d04d444e2a3fd26699f0dbb26,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95bc5dbd279eda6d388ec30e614a300b3e7377edf477e235fd68a408b5928575,PodSandboxId:9ddcc19db3580843861d67b547a77d3a8d429b55c0a3c2ee3bba3ad96f6c726b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727811048887857582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d10a89edf2195041bf7b272302c3c39e01a5af56118e22a81c8e75031db83b8b,PodSandboxId:7761291d35db572afa54b59123e615dd81d37ee1fcc9ebe5e1715dd51dcac7c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727811048827632586,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,},Ann
otations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d523f1298c385184a9e7db15e8734998757e06fff0e55aa1844ab59ccf00f07e,PodSandboxId:8ddf36dc2effd5b387a7cb5392afc6a34d689dee2d0047e27480f81871db6f74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727810590371392470,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3,PodSandboxId:b4ab4980fd9c6cd234ac1f28d252c896bc77b9f7b271045be59cae0784f96217,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727810449363564194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a,PodSandboxId:69e4ceb6e3399a835dcd125c01919d9053abee035e3bf2b2595377489dc56cb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727810449354599709,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-5017-4ada-bf2f-61b640ee2030,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525,PodSandboxId:f7fcfb918d1fd69f11a77764747f9408ce5ec85d4f114f22269ab22622168240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727810437214011626,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c,PodSandboxId:65474abfbeabf483f4a28a3e3229b175f3f41ffad78ea5d1303d49d9419d3ec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727810437061941516,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7,PodSandboxId:4873897c8ffd7c74e4093155af6f4743491c9dcea652c8fba96e6904de277652,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727810424745133125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e,PodSandboxId:c74bc4df7851a3393d5eca165d5e24dbdef7af644bc9844819bad425c8663aac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727810424759398087,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d158fc8c-97ac-4a9b-bf2e-01b9b91ab758 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:33:19 ha-193737 crio[3761]: time="2024-10-01 19:33:19.573607811Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f8ed5dd4-e99a-432c-a787-72f548242337 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:33:19 ha-193737 crio[3761]: time="2024-10-01 19:33:19.573769081Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f8ed5dd4-e99a-432c-a787-72f548242337 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:33:19 ha-193737 crio[3761]: time="2024-10-01 19:33:19.575321397Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ed4d27aa-7f1f-459c-9776-bf661fe986b2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:33:19 ha-193737 crio[3761]: time="2024-10-01 19:33:19.575964251Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811199575933445,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed4d27aa-7f1f-459c-9776-bf661fe986b2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:33:19 ha-193737 crio[3761]: time="2024-10-01 19:33:19.577192704Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=52eeec72-6ff6-4beb-90a5-f4047cf9a84f name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:33:19 ha-193737 crio[3761]: time="2024-10-01 19:33:19.577271280Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=52eeec72-6ff6-4beb-90a5-f4047cf9a84f name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:33:19 ha-193737 crio[3761]: time="2024-10-01 19:33:19.577935352Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:42d2996fa57056f337846a0c663c666896bc5623403716bf936f95a745c26751,PodSandboxId:d22b13768ce874522ceed8cc3440fb78e51adfcc55c5d48b10e0a1ba76f2fa79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727811144032445382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc73e66125bdf484da9d957113d2dbcd22b1cca191ae53bdd53cddf4df26a9b4,PodSandboxId:f0f8610a34814bf5843b3ba4d4c2b837552c02efdd35a53e341c9b8afa41b367,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727811095028393362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cd510d04d444e2a3fd26699f0dbb26,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe0e73911c1f7037af24faf34585ea0f2dd2050508c66f82fceb1d3f63350357,PodSandboxId:d22b13768ce874522ceed8cc3440fb78e51adfcc55c5d48b10e0a1ba76f2fa79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727811095031826420,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0355d034cef455e56a215145c9058bc9694f5da8c3c4c2172ae416d5af558add,PodSandboxId:9ddcc19db3580843861d67b547a77d3a8d429b55c0a3c2ee3bba3ad96f6c726b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727811091033631042,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da37e56f1a046294a51f258d6619d188b789c2740337a7586e481feaaca27edc,PodSandboxId:fca6d99cf42a83642203449fa4687750128541f4acf4a54e0e9f868f2262e0e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727811082317090540,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:198dab162e8aa5e19a932dc97135bef548d2a2744de22fb4fa9898746a7a9788,PodSandboxId:1bd6f04d79ddbb801bfd04b223d0aa786f5e5f02458dde8c322d43b2862332ad,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727811066130351937,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 813365ea1e3446cbdf9a69d3a73954fd,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e64eec86b7056617efbd7c498622e3f5958857bac4d73be6899dbf8db5c89cf9,PodSandboxId:9133de98e8a424f31f1f22b1bf4d2d17ac28543288eb487a6118187c62434bd9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727811049246022559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82c0e82b0f6c09d87ec13c643737a4ccf5b340e9502946d56bbb217eb96dbe93,PodSandboxId:9c0014ccd9ccba46c670e7c0f6df4fb143de9e63e48a89015e6738bae92e835b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727811048808876184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61ff92cf26d6e3aed32887454ab7d2058de0b8a5e7ea7861f48b1d01a5939727,PodSandboxId:b73cc006e86acbbdb4fd391f82012e597b411d2e2a99955226e63c46f802968b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727811049155845041,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a718e9dc3c409631fa8f5dc4d076b18ea96e3aa4e1019102c18b202167818924,PodSandboxId:7eb78842f19b505167c5397f172a99bb1f7b17780e57a3592433042cee608db5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727811049104667513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-5017-4ada-bf2f-61b640ee2030,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83950c035f12e98c6757b6223f78b7f5a39d863ec5c60eac7c00e820c1c5c076,PodSandboxId:da377024b36877f6c3e94272b41630ae1d13493ca8d22c13b39a58463542dba5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727811048970844751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3382226e00b6ef4a63086f6faaee763c7a138978d0ef813c494eb8ffc1d02c5f,PodSandboxId:f0f8610a34814bf5843b3ba4d4c2b837552c02efdd35a53e341c9b8afa41b367,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727811048981094425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 26cd510d04d444e2a3fd26699f0dbb26,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95bc5dbd279eda6d388ec30e614a300b3e7377edf477e235fd68a408b5928575,PodSandboxId:9ddcc19db3580843861d67b547a77d3a8d429b55c0a3c2ee3bba3ad96f6c726b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727811048887857582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d10a89edf2195041bf7b272302c3c39e01a5af56118e22a81c8e75031db83b8b,PodSandboxId:7761291d35db572afa54b59123e615dd81d37ee1fcc9ebe5e1715dd51dcac7c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727811048827632586,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,},Ann
otations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d523f1298c385184a9e7db15e8734998757e06fff0e55aa1844ab59ccf00f07e,PodSandboxId:8ddf36dc2effd5b387a7cb5392afc6a34d689dee2d0047e27480f81871db6f74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727810590371392470,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3,PodSandboxId:b4ab4980fd9c6cd234ac1f28d252c896bc77b9f7b271045be59cae0784f96217,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727810449363564194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a,PodSandboxId:69e4ceb6e3399a835dcd125c01919d9053abee035e3bf2b2595377489dc56cb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727810449354599709,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-5017-4ada-bf2f-61b640ee2030,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525,PodSandboxId:f7fcfb918d1fd69f11a77764747f9408ce5ec85d4f114f22269ab22622168240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727810437214011626,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c,PodSandboxId:65474abfbeabf483f4a28a3e3229b175f3f41ffad78ea5d1303d49d9419d3ec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727810437061941516,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7,PodSandboxId:4873897c8ffd7c74e4093155af6f4743491c9dcea652c8fba96e6904de277652,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727810424745133125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e,PodSandboxId:c74bc4df7851a3393d5eca165d5e24dbdef7af644bc9844819bad425c8663aac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727810424759398087,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=52eeec72-6ff6-4beb-90a5-f4047cf9a84f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	42d2996fa5705       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      55 seconds ago       Running             storage-provisioner       4                   d22b13768ce87       storage-provisioner
	fe0e73911c1f7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   d22b13768ce87       storage-provisioner
	cc73e66125bdf       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            3                   f0f8610a34814       kube-apiserver-ha-193737
	0355d034cef45       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   2                   9ddcc19db3580       kube-controller-manager-ha-193737
	da37e56f1a046       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   fca6d99cf42a8       busybox-7dff88458-rbjkx
	198dab162e8aa       18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460                                      2 minutes ago        Running             kube-vip                  0                   1bd6f04d79ddb       kube-vip-ha-193737
	e64eec86b7056       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   9133de98e8a42       coredns-7c65d6cfc9-hd5hv
	61ff92cf26d6e       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   b73cc006e86ac       kindnet-wnr6g
	a718e9dc3c409       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   7eb78842f19b5       coredns-7c65d6cfc9-v2wsx
	3382226e00b6e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Exited              kube-apiserver            2                   f0f8610a34814       kube-apiserver-ha-193737
	83950c035f12e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      2 minutes ago        Running             kube-scheduler            1                   da377024b3687       kube-scheduler-ha-193737
	95bc5dbd279ed       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago        Exited              kube-controller-manager   1                   9ddcc19db3580       kube-controller-manager-ha-193737
	d10a89edf2195       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   7761291d35db5       etcd-ha-193737
	82c0e82b0f6c0       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      2 minutes ago        Running             kube-proxy                1                   9c0014ccd9ccb       kube-proxy-zpsll
	d523f1298c385       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   8ddf36dc2effd       busybox-7dff88458-rbjkx
	b9a32cfd9baec       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      12 minutes ago       Exited              coredns                   0                   b4ab4980fd9c6       coredns-7c65d6cfc9-hd5hv
	c598f8345f1d8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      12 minutes ago       Exited              coredns                   0                   69e4ceb6e3399       coredns-7c65d6cfc9-v2wsx
	25b91984e532b       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      12 minutes ago       Exited              kindnet-cni               0                   f7fcfb918d1fd       kindnet-wnr6g
	6ce5a1ca06729       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      12 minutes ago       Exited              kube-proxy                0                   65474abfbeabf       kube-proxy-zpsll
	7092a3841df08       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      12 minutes ago       Exited              etcd                      0                   c74bc4df7851a       etcd-ha-193737
	d7d722793679c       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      12 minutes ago       Exited              kube-scheduler            0                   4873897c8ffd7       kube-scheduler-ha-193737
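
The listing above is a CRI-level snapshot from the primary node: the restarted control-plane containers (kube-apiserver, kube-controller-manager, etcd, kube-scheduler) are Running with bumped attempt counts, while their pre-restart instances remain as Exited entries. A comparable snapshot can be taken by hand while the profile is up; the helper below is a hypothetical sketch (not part of the test suite) that assumes the ha-193737 profile is still running and that crictl is available on the guest.

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // dumpContainerStatus shells out to `minikube ssh` and runs crictl on the
    // guest, producing roughly the same table as the "container status" block.
    func dumpContainerStatus(profile string) error {
        out, err := exec.Command("minikube", "-p", profile, "ssh", "--",
            "sudo", "crictl", "ps", "-a").CombinedOutput()
        if err != nil {
            return fmt.Errorf("crictl ps failed: %v\n%s", err, out)
        }
        fmt.Printf("%s", out)
        return nil
    }

    func main() {
        if err := dumpContainerStatus("ha-193737"); err != nil {
            log.Fatal(err)
        }
    }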
	
	
	==> coredns [a718e9dc3c409631fa8f5dc4d076b18ea96e3aa4e1019102c18b202167818924] <==
	Trace[1323641632]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:58164->10.96.0.1:443: read: connection reset by peer 10275ms (19:31:11.375)
	Trace[1323641632]: [10.2758202s] [10.2758202s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:58164->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3] <==
	[INFO] 10.244.2.2:37785 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000112105s
	[INFO] 10.244.0.4:34398 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000118394s
	[INFO] 10.244.0.4:35218 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001965777s
	[INFO] 10.244.1.2:56827 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00018086s
	[INFO] 10.244.1.2:50439 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003922693s
	[INFO] 10.244.2.2:33611 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123417s
	[INFO] 10.244.2.2:37877 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000204398s
	[INFO] 10.244.2.2:42894 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000164711s
	[INFO] 10.244.0.4:58512 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012749s
	[INFO] 10.244.0.4:60496 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126088s
	[INFO] 10.244.0.4:42876 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000054151s
	[INFO] 10.244.0.4:46048 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001023388s
	[INFO] 10.244.0.4:45307 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069619s
	[INFO] 10.244.0.4:54830 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086737s
	[INFO] 10.244.1.2:56566 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104818s
	[INFO] 10.244.2.2:44960 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017462s
	[INFO] 10.244.2.2:35520 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000147677s
	[INFO] 10.244.0.4:34887 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000089068s
	[INFO] 10.244.0.4:47038 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093137s
	[INFO] 10.244.1.2:44935 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000181924s
	[INFO] 10.244.2.2:51593 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000184246s
	[INFO] 10.244.2.2:37070 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000101666s
	[INFO] 10.244.0.4:49420 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115127s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a] <==
	[INFO] 10.244.2.2:47614 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170182s
	[INFO] 10.244.2.2:52937 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001095974s
	[INFO] 10.244.2.2:59751 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106474s
	[INFO] 10.244.0.4:55786 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001514187s
	[INFO] 10.244.0.4:56387 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000050769s
	[INFO] 10.244.1.2:54787 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013733s
	[INFO] 10.244.1.2:58281 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113165s
	[INFO] 10.244.1.2:48712 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097722s
	[INFO] 10.244.2.2:57237 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152523s
	[INFO] 10.244.2.2:47314 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106445s
	[INFO] 10.244.0.4:43887 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000199016s
	[INFO] 10.244.0.4:49901 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000240769s
	[INFO] 10.244.1.2:54100 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210259s
	[INFO] 10.244.1.2:60342 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000221646s
	[INFO] 10.244.1.2:33783 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000165277s
	[INFO] 10.244.2.2:45378 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000197846s
	[INFO] 10.244.2.2:33324 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000101556s
	[INFO] 10.244.0.4:40016 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000071122s
	[INFO] 10.244.0.4:40114 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135338s
	[INFO] 10.244.0.4:53904 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00006854s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e64eec86b7056617efbd7c498622e3f5958857bac4d73be6899dbf8db5c89cf9] <==
	Trace[2098793411]: [17.084408074s] [17.084408074s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41256->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41256->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
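
All of the coredns logs above show the same window: during the restart the pods cannot reach the kubernetes Service VIP at 10.96.0.1:443 (connection refused, no route to host, or the old apiserver asking for fresh credentials), so the kubernetes plugin keeps retrying its list/watch and readiness stays at "Still waiting on: kubernetes" until the apiserver and kube-vip settle. The failing step is simply a TCP connection to the VIP; the probe below is a hypothetical sketch that reproduces that first step when run from a pod inside the cluster (10.96.0.1 is only routable in-cluster).

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // The coredns kubernetes plugin first needs a working connection to the
    // API Service VIP; this probe attempts the same TCP dial and surfaces the
    // error in the same "connection refused" / "no route to host" form.
    func main() {
        conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
        if err != nil {
            fmt.Println("kubernetes Service VIP unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("kubernetes Service VIP reachable")
    }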
	
	
	==> describe nodes <==
	Name:               ha-193737
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-193737
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=ha-193737
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T19_20_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:20:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-193737
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:33:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 19:31:31 +0000   Tue, 01 Oct 2024 19:20:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 19:31:31 +0000   Tue, 01 Oct 2024 19:20:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 19:31:31 +0000   Tue, 01 Oct 2024 19:20:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 19:31:31 +0000   Tue, 01 Oct 2024 19:20:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.14
	  Hostname:    ha-193737
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 008c1ccd624b4ab3b90055ff9f65b018
	  System UUID:                008c1ccd-624b-4ab3-b900-55ff9f65b018
	  Boot ID:                    ad12c9f1-7a18-4d35-9ec9-00d91da3365b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rbjkx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-hd5hv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-7c65d6cfc9-v2wsx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-ha-193737                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-wnr6g                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-193737             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-193737    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-zpsll                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-193737             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-193737                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 107s                   kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-193737 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-193737 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-193737 status is now: NodeHasSufficientMemory
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     12m                    kubelet          Node ha-193737 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                    kubelet          Node ha-193737 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                    kubelet          Node ha-193737 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           12m                    node-controller  Node ha-193737 event: Registered Node ha-193737 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-193737 status is now: NodeReady
	  Normal   RegisteredNode           11m                    node-controller  Node ha-193737 event: Registered Node ha-193737 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-193737 event: Registered Node ha-193737 in Controller
	  Warning  ContainerGCFailed        2m50s (x2 over 3m50s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             2m38s (x3 over 3m27s)  kubelet          Node ha-193737 status is now: NodeNotReady
	  Normal   RegisteredNode           112s                   node-controller  Node ha-193737 event: Registered Node ha-193737 in Controller
	  Normal   RegisteredNode           100s                   node-controller  Node ha-193737 event: Registered Node ha-193737 in Controller
	  Normal   RegisteredNode           40s                    node-controller  Node ha-193737 event: Registered Node ha-193737 in Controller
	
	
	Name:               ha-193737-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-193737-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=ha-193737
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T19_21_26_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:21:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-193737-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:33:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 19:32:14 +0000   Tue, 01 Oct 2024 19:31:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 19:32:14 +0000   Tue, 01 Oct 2024 19:31:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 19:32:14 +0000   Tue, 01 Oct 2024 19:31:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 19:32:14 +0000   Tue, 01 Oct 2024 19:31:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    ha-193737-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e20c76476d7c4acaa5fd75e5b8fa3bab
	  System UUID:                e20c7647-6d7c-4aca-a5fd-75e5b8fa3bab
	  Boot ID:                    8c6fc033-e543-4e30-847d-834cbaf17d73
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fz5bb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-193737-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-drdlr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-193737-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-193737-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-4294m                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-193737-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-193737-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 88s                    kube-proxy       
	  Normal  Starting                 11m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-193737-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-193737-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)      kubelet          Node ha-193737-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                    node-controller  Node ha-193737-m02 event: Registered Node ha-193737-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-193737-m02 event: Registered Node ha-193737-m02 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-193737-m02 event: Registered Node ha-193737-m02 in Controller
	  Normal  NodeNotReady             8m20s                  node-controller  Node ha-193737-m02 status is now: NodeNotReady
	  Normal  Starting                 2m16s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m16s (x8 over 2m16s)  kubelet          Node ha-193737-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m16s (x8 over 2m16s)  kubelet          Node ha-193737-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m16s (x7 over 2m16s)  kubelet          Node ha-193737-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           112s                   node-controller  Node ha-193737-m02 event: Registered Node ha-193737-m02 in Controller
	  Normal  RegisteredNode           100s                   node-controller  Node ha-193737-m02 event: Registered Node ha-193737-m02 in Controller
	  Normal  RegisteredNode           40s                    node-controller  Node ha-193737-m02 event: Registered Node ha-193737-m02 in Controller
	
	
	Name:               ha-193737-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-193737-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=ha-193737
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T19_22_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:22:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-193737-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:33:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 19:32:52 +0000   Tue, 01 Oct 2024 19:32:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 19:32:52 +0000   Tue, 01 Oct 2024 19:32:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 19:32:52 +0000   Tue, 01 Oct 2024 19:32:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 19:32:52 +0000   Tue, 01 Oct 2024 19:32:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    ha-193737-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f175e16bf19e4217880e926a75ac0065
	  System UUID:                f175e16b-f19e-4217-880e-926a75ac0065
	  Boot ID:                    da97e82a-9fd9-402d-a00c-bb2c5f9a0181
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-qzzzv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-193737-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-bqht8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-193737-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-193737-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-9pm4t                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-193737-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-193737-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 42s                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-193737-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-193737-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-193737-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-193737-m03 event: Registered Node ha-193737-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-193737-m03 event: Registered Node ha-193737-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-193737-m03 event: Registered Node ha-193737-m03 in Controller
	  Normal   RegisteredNode           112s               node-controller  Node ha-193737-m03 event: Registered Node ha-193737-m03 in Controller
	  Normal   RegisteredNode           100s               node-controller  Node ha-193737-m03 event: Registered Node ha-193737-m03 in Controller
	  Normal   NodeNotReady             72s                node-controller  Node ha-193737-m03 status is now: NodeNotReady
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  59s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 59s                kubelet          Node ha-193737-m03 has been rebooted, boot id: da97e82a-9fd9-402d-a00c-bb2c5f9a0181
	  Normal   NodeHasSufficientMemory  59s (x2 over 59s)  kubelet          Node ha-193737-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x2 over 59s)  kubelet          Node ha-193737-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x2 over 59s)  kubelet          Node ha-193737-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                59s                kubelet          Node ha-193737-m03 status is now: NodeReady
	  Normal   RegisteredNode           40s                node-controller  Node ha-193737-m03 event: Registered Node ha-193737-m03 in Controller
	
	
	Name:               ha-193737-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-193737-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=ha-193737
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T19_23_47_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:23:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-193737-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:33:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 19:33:12 +0000   Tue, 01 Oct 2024 19:33:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 19:33:12 +0000   Tue, 01 Oct 2024 19:33:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 19:33:12 +0000   Tue, 01 Oct 2024 19:33:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 19:33:12 +0000   Tue, 01 Oct 2024 19:33:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.152
	  Hostname:    ha-193737-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef1097b5e0604ff19d7361f2921010b9
	  System UUID:                ef1097b5-e060-4ff1-9d73-61f2921010b9
	  Boot ID:                    2719bddd-154a-445f-a3e2-5e319deb1327
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-h886q       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m34s
	  kube-system                 kube-proxy-hz2nn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4s                     kube-proxy       
	  Normal   Starting                 9m29s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  9m34s (x3 over 9m34s)  kubelet          Node ha-193737-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m34s (x3 over 9m34s)  kubelet          Node ha-193737-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m34s (x3 over 9m34s)  kubelet          Node ha-193737-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m30s                  node-controller  Node ha-193737-m04 event: Registered Node ha-193737-m04 in Controller
	  Normal   RegisteredNode           9m29s                  node-controller  Node ha-193737-m04 event: Registered Node ha-193737-m04 in Controller
	  Normal   RegisteredNode           9m29s                  node-controller  Node ha-193737-m04 event: Registered Node ha-193737-m04 in Controller
	  Normal   NodeReady                9m14s                  kubelet          Node ha-193737-m04 status is now: NodeReady
	  Normal   RegisteredNode           112s                   node-controller  Node ha-193737-m04 event: Registered Node ha-193737-m04 in Controller
	  Normal   RegisteredNode           100s                   node-controller  Node ha-193737-m04 event: Registered Node ha-193737-m04 in Controller
	  Normal   NodeNotReady             72s                    node-controller  Node ha-193737-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           40s                    node-controller  Node ha-193737-m04 event: Registered Node ha-193737-m04 in Controller
	  Normal   Starting                 8s                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                     kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 8s                     kubelet          Node ha-193737-m04 has been rebooted, boot id: 2719bddd-154a-445f-a3e2-5e319deb1327
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)        kubelet          Node ha-193737-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)        kubelet          Node ha-193737-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)        kubelet          Node ha-193737-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                8s                     kubelet          Node ha-193737-m04 status is now: NodeReady
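
The node descriptions above show all four machines back in Ready state after the restart; the Rebooted / NodeNotReady / RegisteredNode event sequences on ha-193737-m03 and ha-193737-m04 record the reboots being noticed by the kubelet and re-admitted by the node controller. A compact way to re-check the same state is a plain node listing; the snippet below is a hypothetical helper that assumes the kubeconfig context carries the profile name (ha-193737), following the kubectl invocations used elsewhere in this report.

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // nodeSummary prints name, status, roles, version, internal IP, OS image,
    // kernel and runtime for every node -- a one-line-per-node digest of the
    // "describe nodes" blocks captured above.
    func nodeSummary(context string) error {
        out, err := exec.Command("kubectl", "--context", context,
            "get", "nodes", "-o", "wide").CombinedOutput()
        if err != nil {
            return fmt.Errorf("kubectl get nodes failed: %v\n%s", err, out)
        }
        fmt.Printf("%s", out)
        return nil
    }

    func main() {
        if err := nodeSummary("ha-193737"); err != nil {
            log.Fatal(err)
        }
    }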
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.804167] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.059657] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065329] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.157689] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.148971] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.256595] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +3.897654] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +5.026995] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.059544] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.061605] systemd-fstab-generator[1305]: Ignoring "noauto" option for root device
	[  +0.119912] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.150839] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.375138] kauditd_printk_skb: 38 callbacks suppressed
	[Oct 1 19:21] kauditd_printk_skb: 24 callbacks suppressed
	[Oct 1 19:30] systemd-fstab-generator[3686]: Ignoring "noauto" option for root device
	[  +0.146997] systemd-fstab-generator[3698]: Ignoring "noauto" option for root device
	[  +0.178415] systemd-fstab-generator[3712]: Ignoring "noauto" option for root device
	[  +0.155876] systemd-fstab-generator[3724]: Ignoring "noauto" option for root device
	[  +0.286412] systemd-fstab-generator[3752]: Ignoring "noauto" option for root device
	[  +0.739524] systemd-fstab-generator[3849]: Ignoring "noauto" option for root device
	[  +5.551763] kauditd_printk_skb: 122 callbacks suppressed
	[Oct 1 19:31] kauditd_printk_skb: 85 callbacks suppressed
	[  +5.445779] kauditd_printk_skb: 2 callbacks suppressed
	[ +25.734051] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e] <==
	{"level":"info","ts":"2024-10-01T19:29:09.867822Z","caller":"traceutil/trace.go:171","msg":"trace[1059982064] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; }","duration":"735.738205ms","start":"2024-10-01T19:29:09.132078Z","end":"2024-10-01T19:29:09.867816Z","steps":["trace[1059982064] 'agreement among raft nodes before linearized reading'  (duration: 735.687491ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T19:29:09.867851Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T19:29:09.132068Z","time spent":"735.77607ms","remote":"127.0.0.1:42554","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":0,"request content":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" limit:10000 "}
	2024/10/01 19:29:09 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-10-01T19:29:09.998558Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.14:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-01T19:29:09.998685Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.14:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-01T19:29:10.000488Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"599035dfeb7e0476","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-10-01T19:29:10.000787Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"be719cfe4c1d88a"}
	{"level":"info","ts":"2024-10-01T19:29:10.000825Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"be719cfe4c1d88a"}
	{"level":"info","ts":"2024-10-01T19:29:10.000859Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"be719cfe4c1d88a"}
	{"level":"info","ts":"2024-10-01T19:29:10.000972Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a"}
	{"level":"info","ts":"2024-10-01T19:29:10.001017Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a"}
	{"level":"info","ts":"2024-10-01T19:29:10.001063Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a"}
	{"level":"info","ts":"2024-10-01T19:29:10.001074Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"be719cfe4c1d88a"}
	{"level":"info","ts":"2024-10-01T19:29:10.001080Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e0aed16a49605245"}
	{"level":"info","ts":"2024-10-01T19:29:10.001092Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e0aed16a49605245"}
	{"level":"info","ts":"2024-10-01T19:29:10.001124Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e0aed16a49605245"}
	{"level":"info","ts":"2024-10-01T19:29:10.001213Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"599035dfeb7e0476","remote-peer-id":"e0aed16a49605245"}
	{"level":"info","ts":"2024-10-01T19:29:10.001253Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"599035dfeb7e0476","remote-peer-id":"e0aed16a49605245"}
	{"level":"info","ts":"2024-10-01T19:29:10.001305Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"599035dfeb7e0476","remote-peer-id":"e0aed16a49605245"}
	{"level":"info","ts":"2024-10-01T19:29:10.001316Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e0aed16a49605245"}
	{"level":"info","ts":"2024-10-01T19:29:10.004454Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.14:2380"}
	{"level":"warn","ts":"2024-10-01T19:29:10.004487Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.868666744s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-10-01T19:29:10.004638Z","caller":"traceutil/trace.go:171","msg":"trace[25566447] range","detail":"{range_begin:; range_end:; }","duration":"8.868838929s","start":"2024-10-01T19:29:01.135783Z","end":"2024-10-01T19:29:10.004622Z","steps":["trace[25566447] 'agreement among raft nodes before linearized reading'  (duration: 8.868664936s)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T19:29:10.004685Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.14:2380"}
	{"level":"info","ts":"2024-10-01T19:29:10.004772Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-193737","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.14:2380"],"advertise-client-urls":["https://192.168.39.14:2379"]}
	
	
	==> etcd [d10a89edf2195041bf7b272302c3c39e01a5af56118e22a81c8e75031db83b8b] <==
	{"level":"warn","ts":"2024-10-01T19:32:15.821296Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"e0aed16a49605245","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:32:15.921004Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"e0aed16a49605245","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:32:15.995453Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"e0aed16a49605245","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:32:16.000115Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"e0aed16a49605245","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:32:16.002443Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"e0aed16a49605245","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:32:16.021391Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"e0aed16a49605245","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:32:16.121318Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"e0aed16a49605245","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:32:16.221286Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"599035dfeb7e0476","from":"599035dfeb7e0476","remote-peer-id":"e0aed16a49605245","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T19:32:18.113645Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.101:2380/version","remote-member-id":"e0aed16a49605245","error":"Get \"https://192.168.39.101:2380/version\": dial tcp 192.168.39.101:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-10-01T19:32:18.113851Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e0aed16a49605245","error":"Get \"https://192.168.39.101:2380/version\": dial tcp 192.168.39.101:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-10-01T19:32:19.718070Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e0aed16a49605245","rtt":"0s","error":"dial tcp 192.168.39.101:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-10-01T19:32:19.718172Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e0aed16a49605245","rtt":"0s","error":"dial tcp 192.168.39.101:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-10-01T19:32:22.116484Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.101:2380/version","remote-member-id":"e0aed16a49605245","error":"Get \"https://192.168.39.101:2380/version\": dial tcp 192.168.39.101:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-10-01T19:32:22.116632Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e0aed16a49605245","error":"Get \"https://192.168.39.101:2380/version\": dial tcp 192.168.39.101:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-10-01T19:32:24.719156Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e0aed16a49605245","rtt":"0s","error":"dial tcp 192.168.39.101:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-10-01T19:32:24.719252Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e0aed16a49605245","rtt":"0s","error":"dial tcp 192.168.39.101:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-10-01T19:32:26.119059Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.101:2380/version","remote-member-id":"e0aed16a49605245","error":"Get \"https://192.168.39.101:2380/version\": dial tcp 192.168.39.101:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-10-01T19:32:26.119205Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e0aed16a49605245","error":"Get \"https://192.168.39.101:2380/version\": dial tcp 192.168.39.101:2380: connect: connection refused"}
	{"level":"info","ts":"2024-10-01T19:32:26.693541Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"e0aed16a49605245"}
	{"level":"info","ts":"2024-10-01T19:32:26.694686Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"599035dfeb7e0476","remote-peer-id":"e0aed16a49605245"}
	{"level":"info","ts":"2024-10-01T19:32:26.696520Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"599035dfeb7e0476","remote-peer-id":"e0aed16a49605245"}
	{"level":"info","ts":"2024-10-01T19:32:26.708449Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"599035dfeb7e0476","to":"e0aed16a49605245","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-10-01T19:32:26.708510Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"599035dfeb7e0476","remote-peer-id":"e0aed16a49605245"}
	{"level":"info","ts":"2024-10-01T19:32:26.709345Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"599035dfeb7e0476","to":"e0aed16a49605245","stream-type":"stream Message"}
	{"level":"info","ts":"2024-10-01T19:32:26.709421Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"599035dfeb7e0476","remote-peer-id":"e0aed16a49605245"}
	
	
	==> kernel <==
	 19:33:20 up 13 min,  0 users,  load average: 0.39, 0.48, 0.34
	Linux ha-193737 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525] <==
	I1001 19:28:38.345531       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	I1001 19:28:48.345611       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I1001 19:28:48.345677       1 main.go:322] Node ha-193737-m03 has CIDR [10.244.2.0/24] 
	I1001 19:28:48.345941       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1001 19:28:48.345961       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	I1001 19:28:48.346018       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I1001 19:28:48.346034       1 main.go:299] handling current node
	I1001 19:28:48.346045       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I1001 19:28:48.346050       1 main.go:322] Node ha-193737-m02 has CIDR [10.244.1.0/24] 
	I1001 19:28:58.354002       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I1001 19:28:58.354056       1 main.go:299] handling current node
	I1001 19:28:58.354085       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I1001 19:28:58.354092       1 main.go:322] Node ha-193737-m02 has CIDR [10.244.1.0/24] 
	I1001 19:28:58.354304       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I1001 19:28:58.354320       1 main.go:322] Node ha-193737-m03 has CIDR [10.244.2.0/24] 
	I1001 19:28:58.354370       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1001 19:28:58.354375       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	I1001 19:29:08.354044       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I1001 19:29:08.354147       1 main.go:299] handling current node
	I1001 19:29:08.354184       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I1001 19:29:08.354190       1 main.go:322] Node ha-193737-m02 has CIDR [10.244.1.0/24] 
	I1001 19:29:08.354329       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I1001 19:29:08.354348       1 main.go:322] Node ha-193737-m03 has CIDR [10.244.2.0/24] 
	I1001 19:29:08.354421       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1001 19:29:08.354437       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [61ff92cf26d6e3aed32887454ab7d2058de0b8a5e7ea7861f48b1d01a5939727] <==
	I1001 19:32:50.065423       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	I1001 19:33:00.065977       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I1001 19:33:00.066076       1 main.go:299] handling current node
	I1001 19:33:00.066114       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I1001 19:33:00.066128       1 main.go:322] Node ha-193737-m02 has CIDR [10.244.1.0/24] 
	I1001 19:33:00.066298       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I1001 19:33:00.066318       1 main.go:322] Node ha-193737-m03 has CIDR [10.244.2.0/24] 
	I1001 19:33:00.066437       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1001 19:33:00.066456       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	I1001 19:33:10.074221       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I1001 19:33:10.074339       1 main.go:299] handling current node
	I1001 19:33:10.074374       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I1001 19:33:10.074383       1 main.go:322] Node ha-193737-m02 has CIDR [10.244.1.0/24] 
	I1001 19:33:10.074554       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I1001 19:33:10.074576       1 main.go:322] Node ha-193737-m03 has CIDR [10.244.2.0/24] 
	I1001 19:33:10.074630       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1001 19:33:10.074646       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	I1001 19:33:20.068437       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I1001 19:33:20.068504       1 main.go:299] handling current node
	I1001 19:33:20.068542       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I1001 19:33:20.068551       1 main.go:322] Node ha-193737-m02 has CIDR [10.244.1.0/24] 
	I1001 19:33:20.068789       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I1001 19:33:20.068796       1 main.go:322] Node ha-193737-m03 has CIDR [10.244.2.0/24] 
	I1001 19:33:20.068873       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1001 19:33:20.068879       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [3382226e00b6ef4a63086f6faaee763c7a138978d0ef813c494eb8ffc1d02c5f] <==
	I1001 19:30:49.576316       1 options.go:228] external host was not specified, using 192.168.39.14
	I1001 19:30:49.584942       1 server.go:142] Version: v1.31.1
	I1001 19:30:49.585046       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 19:30:50.301863       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I1001 19:30:50.357816       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1001 19:30:50.362894       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1001 19:30:50.362927       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1001 19:30:50.363226       1 instance.go:232] Using reconciler: lease
	W1001 19:31:10.301595       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1001 19:31:10.301596       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1001 19:31:10.370298       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [cc73e66125bdf484da9d957113d2dbcd22b1cca191ae53bdd53cddf4df26a9b4] <==
	I1001 19:31:37.189083       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1001 19:31:37.189121       1 policy_source.go:224] refreshing policies
	I1001 19:31:37.206958       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1001 19:31:37.229166       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1001 19:31:37.234774       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1001 19:31:37.234855       1 aggregator.go:171] initial CRD sync complete...
	I1001 19:31:37.234871       1 autoregister_controller.go:144] Starting autoregister controller
	I1001 19:31:37.234877       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1001 19:31:37.234883       1 cache.go:39] Caches are synced for autoregister controller
	I1001 19:31:37.235574       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1001 19:31:37.236173       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1001 19:31:37.236202       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1001 19:31:37.236931       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1001 19:31:37.237042       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1001 19:31:37.237265       1 shared_informer.go:320] Caches are synced for configmaps
	I1001 19:31:37.238948       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W1001 19:31:37.244200       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.27]
	I1001 19:31:37.245455       1 controller.go:615] quota admission added evaluator for: endpoints
	I1001 19:31:37.253871       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1001 19:31:37.257867       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1001 19:31:37.278989       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1001 19:31:38.134497       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1001 19:31:38.574995       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.14 192.168.39.27]
	W1001 19:32:38.576958       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.101 192.168.39.14 192.168.39.27]
	E1001 19:32:38.582416       1 controller.go:163] "Unhandled Error" err="unable to sync kubernetes service: Operation cannot be fulfilled on endpoints \"kubernetes\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	
	
	==> kube-controller-manager [0355d034cef455e56a215145c9058bc9694f5da8c3c4c2172ae416d5af558add] <==
	I1001 19:31:59.176871       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="55.664µs"
	I1001 19:32:08.691863       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:32:08.691992       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m03"
	I1001 19:32:08.723119       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:32:08.723491       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m03"
	I1001 19:32:08.776864       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.861651ms"
	I1001 19:32:08.778347       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="54.312µs"
	I1001 19:32:11.006218       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m03"
	I1001 19:32:13.956043       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m03"
	I1001 19:32:14.013005       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m02"
	I1001 19:32:21.090480       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:32:21.575407       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m03"
	I1001 19:32:21.592366       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m03"
	I1001 19:32:22.613017       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="172.039µs"
	I1001 19:32:23.910040       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m03"
	I1001 19:32:24.048207       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:32:40.491100       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:32:40.579233       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:32:41.674941       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.821564ms"
	I1001 19:32:41.675020       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.238µs"
	I1001 19:32:52.314115       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m03"
	I1001 19:33:12.520289       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:33:12.520472       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-193737-m04"
	I1001 19:33:12.541459       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:33:13.933910       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	
	
	==> kube-controller-manager [95bc5dbd279eda6d388ec30e614a300b3e7377edf477e235fd68a408b5928575] <==
	I1001 19:30:50.517964       1 serving.go:386] Generated self-signed cert in-memory
	I1001 19:30:50.838702       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I1001 19:30:50.838844       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 19:30:50.840571       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1001 19:30:50.840800       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1001 19:30:50.841759       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1001 19:30:50.842333       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1001 19:31:11.377063       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.14:8443/healthz\": dial tcp 192.168.39.14:8443: connect: connection refused"
	
	
	==> kube-proxy [6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c] <==
	E1001 19:28:04.776620       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-193737&resourceVersion=1760\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1001 19:28:07.846194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	E1001 19:28:07.846776       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1001 19:28:07.846992       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E1001 19:28:07.847057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1760\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1001 19:28:07.847329       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-193737&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E1001 19:28:07.847806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-193737&resourceVersion=1760\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1001 19:28:13.991545       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E1001 19:28:13.991601       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1760\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1001 19:28:13.991689       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	E1001 19:28:13.991753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1001 19:28:13.991826       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-193737&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E1001 19:28:13.991861       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-193737&resourceVersion=1760\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1001 19:28:23.207233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	E1001 19:28:23.207411       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1001 19:28:23.207557       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-193737&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E1001 19:28:23.207600       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-193737&resourceVersion=1760\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1001 19:28:26.279074       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E1001 19:28:26.279198       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1760\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1001 19:28:38.566889       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-193737&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E1001 19:28:38.567082       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-193737&resourceVersion=1760\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1001 19:28:44.712231       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	E1001 19:28:44.712915       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1001 19:28:47.786071       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E1001 19:28:47.791881       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1760\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [82c0e82b0f6c09d87ec13c643737a4ccf5b340e9502946d56bbb217eb96dbe93] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 19:30:50.666872       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-193737\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E1001 19:30:53.734927       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-193737\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E1001 19:30:56.807382       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-193737\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E1001 19:31:02.950263       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-193737\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E1001 19:31:15.239758       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-193737\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I1001 19:31:32.767297       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.14"]
	E1001 19:31:32.767596       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 19:31:32.817907       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 19:31:32.818028       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 19:31:32.818073       1 server_linux.go:169] "Using iptables Proxier"
	I1001 19:31:32.822309       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 19:31:32.822898       1 server.go:483] "Version info" version="v1.31.1"
	I1001 19:31:32.823310       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 19:31:32.827092       1 config.go:199] "Starting service config controller"
	I1001 19:31:32.827249       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 19:31:32.827354       1 config.go:105] "Starting endpoint slice config controller"
	I1001 19:31:32.827434       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 19:31:32.828915       1 config.go:328] "Starting node config controller"
	I1001 19:31:32.829066       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 19:31:32.928570       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 19:31:32.928625       1 shared_informer.go:320] Caches are synced for service config
	I1001 19:31:32.929159       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [83950c035f12e98c6757b6223f78b7f5a39d863ec5c60eac7c00e820c1c5c076] <==
	W1001 19:31:29.012278       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.14:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.14:8443: connect: connection refused
	E1001 19:31:29.012334       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.14:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.14:8443: connect: connection refused" logger="UnhandledError"
	W1001 19:31:29.307836       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.14:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.14:8443: connect: connection refused
	E1001 19:31:29.307955       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.14:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.14:8443: connect: connection refused" logger="UnhandledError"
	W1001 19:31:29.471299       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.14:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.14:8443: connect: connection refused
	E1001 19:31:29.471418       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.14:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.14:8443: connect: connection refused" logger="UnhandledError"
	W1001 19:31:29.559295       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.14:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.14:8443: connect: connection refused
	E1001 19:31:29.559427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.14:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.14:8443: connect: connection refused" logger="UnhandledError"
	W1001 19:31:30.176066       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.14:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.14:8443: connect: connection refused
	E1001 19:31:30.176146       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.14:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.14:8443: connect: connection refused" logger="UnhandledError"
	W1001 19:31:30.955508       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.14:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.14:8443: connect: connection refused
	E1001 19:31:30.955620       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.14:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.14:8443: connect: connection refused" logger="UnhandledError"
	W1001 19:31:31.015208       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.14:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.14:8443: connect: connection refused
	E1001 19:31:31.015296       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.14:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.14:8443: connect: connection refused" logger="UnhandledError"
	W1001 19:31:31.136450       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.14:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.14:8443: connect: connection refused
	E1001 19:31:31.136518       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.14:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.14:8443: connect: connection refused" logger="UnhandledError"
	W1001 19:31:31.619353       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.14:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.14:8443: connect: connection refused
	E1001 19:31:31.619410       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.14:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.14:8443: connect: connection refused" logger="UnhandledError"
	W1001 19:31:31.846120       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.14:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.14:8443: connect: connection refused
	E1001 19:31:31.846257       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.14:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.14:8443: connect: connection refused" logger="UnhandledError"
	W1001 19:31:32.824494       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.14:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.14:8443: connect: connection refused
	E1001 19:31:32.824636       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.14:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.14:8443: connect: connection refused" logger="UnhandledError"
	W1001 19:31:33.558477       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.14:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.14:8443: connect: connection refused
	E1001 19:31:33.558622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.14:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.14:8443: connect: connection refused" logger="UnhandledError"
	I1001 19:31:55.786797       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7] <==
	E1001 19:23:47.081864       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 785d6c85-2697-4f02-80a4-55483a0faa64(kube-system/kube-proxy-z5qhk) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-z5qhk"
	E1001 19:23:47.081920       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-z5qhk\": pod kube-proxy-z5qhk is already assigned to node \"ha-193737-m04\"" pod="kube-system/kube-proxy-z5qhk"
	I1001 19:23:47.083299       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-z5qhk" node="ha-193737-m04"
	E1001 19:23:47.138476       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4q2pc\": pod kindnet-4q2pc is already assigned to node \"ha-193737-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4q2pc" node="ha-193737-m04"
	E1001 19:23:47.138649       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f23b02a5-c64e-44c3-83b9-7192d19a6efc(kube-system/kindnet-4q2pc) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-4q2pc"
	E1001 19:23:47.138779       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4q2pc\": pod kindnet-4q2pc is already assigned to node \"ha-193737-m04\"" pod="kube-system/kindnet-4q2pc"
	I1001 19:23:47.138823       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-4q2pc" node="ha-193737-m04"
	E1001 19:28:48.062122       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E1001 19:28:56.022025       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E1001 19:28:56.308818       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E1001 19:28:59.265228       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E1001 19:28:59.619153       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E1001 19:28:59.734866       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E1001 19:29:00.233835       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E1001 19:29:00.942769       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E1001 19:29:02.996497       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E1001 19:29:04.540046       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E1001 19:29:05.100416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E1001 19:29:05.312911       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E1001 19:29:05.856067       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E1001 19:29:07.264696       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E1001 19:29:09.117591       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	W1001 19:29:09.269390       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1001 19:29:09.269441       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1001 19:29:09.826057       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 01 19:32:10 ha-193737 kubelet[1313]: I1001 19:32:10.040934    1313 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-193737"
	Oct 01 19:32:10 ha-193737 kubelet[1313]: I1001 19:32:10.561201    1313 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-193737" podUID="cbe8e6a4-08f3-4db3-af4d-810a5592597c"
	Oct 01 19:32:11 ha-193737 kubelet[1313]: E1001 19:32:11.196131    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811131195577452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:32:11 ha-193737 kubelet[1313]: E1001 19:32:11.196470    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811131195577452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:32:13 ha-193737 kubelet[1313]: I1001 19:32:13.014523    1313 scope.go:117] "RemoveContainer" containerID="fe0e73911c1f7037af24faf34585ea0f2dd2050508c66f82fceb1d3f63350357"
	Oct 01 19:32:13 ha-193737 kubelet[1313]: E1001 19:32:13.014685    1313 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d5b587a6-418b-47e5-9bf7-3fb6fa5e3372)\"" pod="kube-system/storage-provisioner" podUID="d5b587a6-418b-47e5-9bf7-3fb6fa5e3372"
	Oct 01 19:32:21 ha-193737 kubelet[1313]: E1001 19:32:21.198088    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811141197665905,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:32:21 ha-193737 kubelet[1313]: E1001 19:32:21.198532    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811141197665905,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:32:24 ha-193737 kubelet[1313]: I1001 19:32:24.014945    1313 scope.go:117] "RemoveContainer" containerID="fe0e73911c1f7037af24faf34585ea0f2dd2050508c66f82fceb1d3f63350357"
	Oct 01 19:32:24 ha-193737 kubelet[1313]: I1001 19:32:24.664901    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-193737" podStartSLOduration=14.664869416 podStartE2EDuration="14.664869416s" podCreationTimestamp="2024-10-01 19:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-01 19:32:11.032649162 +0000 UTC m=+700.230698690" watchObservedRunningTime="2024-10-01 19:32:24.664869416 +0000 UTC m=+713.862918943"
	Oct 01 19:32:31 ha-193737 kubelet[1313]: E1001 19:32:31.048365    1313 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 01 19:32:31 ha-193737 kubelet[1313]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 01 19:32:31 ha-193737 kubelet[1313]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 01 19:32:31 ha-193737 kubelet[1313]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 19:32:31 ha-193737 kubelet[1313]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 19:32:31 ha-193737 kubelet[1313]: E1001 19:32:31.201215    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811151200663683,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:32:31 ha-193737 kubelet[1313]: E1001 19:32:31.201241    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811151200663683,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:32:41 ha-193737 kubelet[1313]: E1001 19:32:41.204547    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811161203909792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:32:41 ha-193737 kubelet[1313]: E1001 19:32:41.205076    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811161203909792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:32:51 ha-193737 kubelet[1313]: E1001 19:32:51.207874    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811171207320397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:32:51 ha-193737 kubelet[1313]: E1001 19:32:51.207956    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811171207320397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:33:01 ha-193737 kubelet[1313]: E1001 19:33:01.210513    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811181210115998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:33:01 ha-193737 kubelet[1313]: E1001 19:33:01.210541    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811181210115998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:33:11 ha-193737 kubelet[1313]: E1001 19:33:11.212687    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811191212110813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:33:11 ha-193737 kubelet[1313]: E1001 19:33:11.213080    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811191212110813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1001 19:33:19.192172   38787 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19736-11198/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-193737 -n ha-193737
helpers_test.go:261: (dbg) Run:  kubectl --context ha-193737 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (374.60s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-193737 stop -v=7 --alsologtostderr: exit status 82 (2m0.48320388s)

                                                
                                                
-- stdout --
	* Stopping node "ha-193737-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 19:33:39.051401   39225 out.go:345] Setting OutFile to fd 1 ...
	I1001 19:33:39.051546   39225 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:33:39.051558   39225 out.go:358] Setting ErrFile to fd 2...
	I1001 19:33:39.051565   39225 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:33:39.051834   39225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 19:33:39.052166   39225 out.go:352] Setting JSON to false
	I1001 19:33:39.052280   39225 mustload.go:65] Loading cluster: ha-193737
	I1001 19:33:39.052848   39225 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:33:39.052985   39225 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:33:39.053225   39225 mustload.go:65] Loading cluster: ha-193737
	I1001 19:33:39.053431   39225 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:33:39.053475   39225 stop.go:39] StopHost: ha-193737-m04
	I1001 19:33:39.054021   39225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:33:39.054082   39225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:33:39.070400   39225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39241
	I1001 19:33:39.071249   39225 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:33:39.072421   39225 main.go:141] libmachine: Using API Version  1
	I1001 19:33:39.072452   39225 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:33:39.072841   39225 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:33:39.074871   39225 out.go:177] * Stopping node "ha-193737-m04"  ...
	I1001 19:33:39.076440   39225 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1001 19:33:39.076471   39225 main.go:141] libmachine: (ha-193737-m04) Calling .DriverName
	I1001 19:33:39.076784   39225 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1001 19:33:39.076808   39225 main.go:141] libmachine: (ha-193737-m04) Calling .GetSSHHostname
	I1001 19:33:39.079875   39225 main.go:141] libmachine: (ha-193737-m04) DBG | domain ha-193737-m04 has defined MAC address 52:54:00:18:e8:54 in network mk-ha-193737
	I1001 19:33:39.080323   39225 main.go:141] libmachine: (ha-193737-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:e8:54", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:33:06 +0000 UTC Type:0 Mac:52:54:00:18:e8:54 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-193737-m04 Clientid:01:52:54:00:18:e8:54}
	I1001 19:33:39.080346   39225 main.go:141] libmachine: (ha-193737-m04) DBG | domain ha-193737-m04 has defined IP address 192.168.39.152 and MAC address 52:54:00:18:e8:54 in network mk-ha-193737
	I1001 19:33:39.080541   39225 main.go:141] libmachine: (ha-193737-m04) Calling .GetSSHPort
	I1001 19:33:39.080727   39225 main.go:141] libmachine: (ha-193737-m04) Calling .GetSSHKeyPath
	I1001 19:33:39.080867   39225 main.go:141] libmachine: (ha-193737-m04) Calling .GetSSHUsername
	I1001 19:33:39.081043   39225 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737-m04/id_rsa Username:docker}
	I1001 19:33:39.162807   39225 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1001 19:33:39.216295   39225 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1001 19:33:39.268783   39225 main.go:141] libmachine: Stopping "ha-193737-m04"...
	I1001 19:33:39.268820   39225 main.go:141] libmachine: (ha-193737-m04) Calling .GetState
	I1001 19:33:39.270324   39225 main.go:141] libmachine: (ha-193737-m04) Calling .Stop
	I1001 19:33:39.273776   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 0/120
	I1001 19:33:40.275107   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 1/120
	I1001 19:33:41.276375   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 2/120
	I1001 19:33:42.277666   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 3/120
	I1001 19:33:43.279349   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 4/120
	I1001 19:33:44.281359   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 5/120
	I1001 19:33:45.283527   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 6/120
	I1001 19:33:46.285954   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 7/120
	I1001 19:33:47.287139   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 8/120
	I1001 19:33:48.289405   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 9/120
	I1001 19:33:49.291368   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 10/120
	I1001 19:33:50.292918   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 11/120
	I1001 19:33:51.294407   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 12/120
	I1001 19:33:52.295879   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 13/120
	I1001 19:33:53.298040   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 14/120
	I1001 19:33:54.300056   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 15/120
	I1001 19:33:55.301460   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 16/120
	I1001 19:33:56.302709   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 17/120
	I1001 19:33:57.304111   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 18/120
	I1001 19:33:58.305330   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 19/120
	I1001 19:33:59.306432   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 20/120
	I1001 19:34:00.308190   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 21/120
	I1001 19:34:01.309606   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 22/120
	I1001 19:34:02.311244   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 23/120
	I1001 19:34:03.312698   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 24/120
	I1001 19:34:04.314792   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 25/120
	I1001 19:34:05.316694   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 26/120
	I1001 19:34:06.319517   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 27/120
	I1001 19:34:07.320966   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 28/120
	I1001 19:34:08.322996   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 29/120
	I1001 19:34:09.325220   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 30/120
	I1001 19:34:10.326714   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 31/120
	I1001 19:34:11.328449   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 32/120
	I1001 19:34:12.330296   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 33/120
	I1001 19:34:13.332119   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 34/120
	I1001 19:34:14.334113   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 35/120
	I1001 19:34:15.336167   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 36/120
	I1001 19:34:16.337554   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 37/120
	I1001 19:34:17.339924   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 38/120
	I1001 19:34:18.341081   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 39/120
	I1001 19:34:19.343230   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 40/120
	I1001 19:34:20.344799   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 41/120
	I1001 19:34:21.347394   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 42/120
	I1001 19:34:22.349133   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 43/120
	I1001 19:34:23.350652   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 44/120
	I1001 19:34:24.352609   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 45/120
	I1001 19:34:25.355107   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 46/120
	I1001 19:34:26.356775   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 47/120
	I1001 19:34:27.358802   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 48/120
	I1001 19:34:28.360822   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 49/120
	I1001 19:34:29.362481   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 50/120
	I1001 19:34:30.363954   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 51/120
	I1001 19:34:31.365561   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 52/120
	I1001 19:34:32.366951   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 53/120
	I1001 19:34:33.368480   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 54/120
	I1001 19:34:34.370488   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 55/120
	I1001 19:34:35.372291   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 56/120
	I1001 19:34:36.373786   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 57/120
	I1001 19:34:37.375280   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 58/120
	I1001 19:34:38.377069   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 59/120
	I1001 19:34:39.379570   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 60/120
	I1001 19:34:40.381012   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 61/120
	I1001 19:34:41.382400   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 62/120
	I1001 19:34:42.383667   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 63/120
	I1001 19:34:43.385160   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 64/120
	I1001 19:34:44.386996   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 65/120
	I1001 19:34:45.388486   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 66/120
	I1001 19:34:46.390169   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 67/120
	I1001 19:34:47.391378   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 68/120
	I1001 19:34:48.393110   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 69/120
	I1001 19:34:49.395774   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 70/120
	I1001 19:34:50.397158   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 71/120
	I1001 19:34:51.398696   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 72/120
	I1001 19:34:52.400155   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 73/120
	I1001 19:34:53.401870   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 74/120
	I1001 19:34:54.403696   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 75/120
	I1001 19:34:55.405231   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 76/120
	I1001 19:34:56.407105   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 77/120
	I1001 19:34:57.409352   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 78/120
	I1001 19:34:58.410920   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 79/120
	I1001 19:34:59.413157   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 80/120
	I1001 19:35:00.414777   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 81/120
	I1001 19:35:01.416049   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 82/120
	I1001 19:35:02.417595   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 83/120
	I1001 19:35:03.418858   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 84/120
	I1001 19:35:04.420415   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 85/120
	I1001 19:35:05.421871   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 86/120
	I1001 19:35:06.423364   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 87/120
	I1001 19:35:07.424873   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 88/120
	I1001 19:35:08.426394   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 89/120
	I1001 19:35:09.428867   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 90/120
	I1001 19:35:10.430527   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 91/120
	I1001 19:35:11.432072   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 92/120
	I1001 19:35:12.433860   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 93/120
	I1001 19:35:13.435372   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 94/120
	I1001 19:35:14.437510   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 95/120
	I1001 19:35:15.439216   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 96/120
	I1001 19:35:16.441010   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 97/120
	I1001 19:35:17.442467   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 98/120
	I1001 19:35:18.445233   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 99/120
	I1001 19:35:19.447960   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 100/120
	I1001 19:35:20.449682   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 101/120
	I1001 19:35:21.451652   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 102/120
	I1001 19:35:22.453093   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 103/120
	I1001 19:35:23.455227   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 104/120
	I1001 19:35:24.457613   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 105/120
	I1001 19:35:25.459192   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 106/120
	I1001 19:35:26.460732   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 107/120
	I1001 19:35:27.463573   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 108/120
	I1001 19:35:28.465442   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 109/120
	I1001 19:35:29.467859   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 110/120
	I1001 19:35:30.469839   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 111/120
	I1001 19:35:31.471406   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 112/120
	I1001 19:35:32.473112   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 113/120
	I1001 19:35:33.475010   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 114/120
	I1001 19:35:34.476893   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 115/120
	I1001 19:35:35.478503   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 116/120
	I1001 19:35:36.479910   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 117/120
	I1001 19:35:37.481502   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 118/120
	I1001 19:35:38.482858   39225 main.go:141] libmachine: (ha-193737-m04) Waiting for machine to stop 119/120
	I1001 19:35:39.483500   39225 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1001 19:35:39.483559   39225 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1001 19:35:39.484778   39225 out.go:201] 
	W1001 19:35:39.485745   39225 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1001 19:35:39.485763   39225 out.go:270] * 
	* 
	W1001 19:35:39.487889   39225 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 19:35:39.489143   39225 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:535: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-193737 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Done: out/minikube-linux-amd64 -p ha-193737 status -v=7 --alsologtostderr: (18.9620145s)
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-193737 status -v=7 --alsologtostderr": 
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-193737 status -v=7 --alsologtostderr": 
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-193737 status -v=7 --alsologtostderr": 
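ha_test.go:535 fails because the stop command exited 82 with GUEST_STOP_TIMEOUT: the kvm2 driver polled the ha-193737-m04 domain roughly once per second for two minutes (the 0/120 … 119/120 loop above) and libvirt still reported it as "Running". A minimal manual-triage sketch, assuming shell access to the Jenkins host (the qemu:///system URI and the ha-193737-m04 domain name are taken from the driver DBG lines above, not from any commands the test actually ran):
	# Inspect the libvirt domain the stop command gave up on
	virsh --connect qemu:///system dominfo ha-193737-m04
	virsh --connect qemu:///system list --all
	# Last resort: hard power-off the domain, then see what minikube reports
	virsh --connect qemu:///system destroy ha-193737-m04
	out/minikube-linux-amd64 -p ha-193737 status -v=7 --alsologtostderr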
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-193737 -n ha-193737
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-193737 logs -n 25: (1.61711253s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-193737 ssh -n ha-193737-m02 sudo cat                                          | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m03_ha-193737-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m03:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04:/home/docker/cp-test_ha-193737-m03_ha-193737-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737-m04 sudo cat                                          | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m03_ha-193737-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-193737 cp testdata/cp-test.txt                                                | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m04:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1122815483/001/cp-test_ha-193737-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m04:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737:/home/docker/cp-test_ha-193737-m04_ha-193737.txt                       |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737 sudo cat                                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m04_ha-193737.txt                                 |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m04:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m02:/home/docker/cp-test_ha-193737-m04_ha-193737-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737-m02 sudo cat                                          | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m04_ha-193737-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-193737 cp ha-193737-m04:/home/docker/cp-test.txt                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m03:/home/docker/cp-test_ha-193737-m04_ha-193737-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n                                                                 | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | ha-193737-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-193737 ssh -n ha-193737-m03 sudo cat                                          | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC | 01 Oct 24 19:24 UTC |
	|         | /home/docker/cp-test_ha-193737-m04_ha-193737-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-193737 node stop m02 -v=7                                                     | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-193737 node start m02 -v=7                                                    | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-193737 -v=7                                                           | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:27 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-193737 -v=7                                                                | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:27 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-193737 --wait=true -v=7                                                    | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:29 UTC | 01 Oct 24 19:33 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-193737                                                                | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:33 UTC |                     |
	| node    | ha-193737 node delete m03 -v=7                                                   | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:33 UTC | 01 Oct 24 19:33 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-193737 stop -v=7                                                              | ha-193737 | jenkins | v1.34.0 | 01 Oct 24 19:33 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 19:29:08
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 19:29:08.939916   37328 out.go:345] Setting OutFile to fd 1 ...
	I1001 19:29:08.940061   37328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:29:08.940070   37328 out.go:358] Setting ErrFile to fd 2...
	I1001 19:29:08.940075   37328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:29:08.940255   37328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 19:29:08.940925   37328 out.go:352] Setting JSON to false
	I1001 19:29:08.941970   37328 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4291,"bootTime":1727806658,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 19:29:08.942092   37328 start.go:139] virtualization: kvm guest
	I1001 19:29:08.944107   37328 out.go:177] * [ha-193737] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 19:29:08.945224   37328 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 19:29:08.945250   37328 notify.go:220] Checking for updates...
	I1001 19:29:08.947677   37328 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 19:29:08.948984   37328 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 19:29:08.950121   37328 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:29:08.951135   37328 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 19:29:08.952383   37328 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 19:29:08.954030   37328 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:29:08.954158   37328 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 19:29:08.954851   37328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:29:08.954930   37328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:29:08.972315   37328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39757
	I1001 19:29:08.972863   37328 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:29:08.973399   37328 main.go:141] libmachine: Using API Version  1
	I1001 19:29:08.973418   37328 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:29:08.973808   37328 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:29:08.974026   37328 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:29:09.016232   37328 out.go:177] * Using the kvm2 driver based on existing profile
	I1001 19:29:09.017238   37328 start.go:297] selected driver: kvm2
	I1001 19:29:09.017255   37328 start.go:901] validating driver "kvm2" against &{Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.152 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:29:09.017399   37328 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 19:29:09.017748   37328 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 19:29:09.017862   37328 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-11198/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 19:29:09.034140   37328 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 19:29:09.035179   37328 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 19:29:09.035220   37328 cni.go:84] Creating CNI manager for ""
	I1001 19:29:09.035272   37328 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1001 19:29:09.035346   37328 start.go:340] cluster config:
	{Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.152 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:29:09.035492   37328 iso.go:125] acquiring lock: {Name:mk06aa0d5182c9fbfa5e40313025cbec2b4400a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 19:29:09.037143   37328 out.go:177] * Starting "ha-193737" primary control-plane node in "ha-193737" cluster
	I1001 19:29:09.038208   37328 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 19:29:09.038259   37328 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 19:29:09.038271   37328 cache.go:56] Caching tarball of preloaded images
	I1001 19:29:09.038387   37328 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 19:29:09.038403   37328 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 19:29:09.038571   37328 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/config.json ...
	I1001 19:29:09.038832   37328 start.go:360] acquireMachinesLock for ha-193737: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 19:29:09.038876   37328 start.go:364] duration metric: took 24.118µs to acquireMachinesLock for "ha-193737"
	I1001 19:29:09.038889   37328 start.go:96] Skipping create...Using existing machine configuration
	I1001 19:29:09.038894   37328 fix.go:54] fixHost starting: 
	I1001 19:29:09.039166   37328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:29:09.039202   37328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:29:09.054402   37328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38879
	I1001 19:29:09.054892   37328 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:29:09.055382   37328 main.go:141] libmachine: Using API Version  1
	I1001 19:29:09.055403   37328 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:29:09.055772   37328 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:29:09.055973   37328 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:29:09.056124   37328 main.go:141] libmachine: (ha-193737) Calling .GetState
	I1001 19:29:09.057794   37328 fix.go:112] recreateIfNeeded on ha-193737: state=Running err=<nil>
	W1001 19:29:09.057829   37328 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 19:29:09.059519   37328 out.go:177] * Updating the running kvm2 "ha-193737" VM ...
	I1001 19:29:09.060793   37328 machine.go:93] provisionDockerMachine start ...
	I1001 19:29:09.060817   37328 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:29:09.061040   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:29:09.063725   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:29:09.064214   37328 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:29:09.064240   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:29:09.064406   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:29:09.064594   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:29:09.064743   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:29:09.064855   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:29:09.065011   37328 main.go:141] libmachine: Using SSH client type: native
	I1001 19:29:09.065203   37328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:29:09.065215   37328 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 19:29:09.177577   37328 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-193737
	
	I1001 19:29:09.177611   37328 main.go:141] libmachine: (ha-193737) Calling .GetMachineName
	I1001 19:29:09.177843   37328 buildroot.go:166] provisioning hostname "ha-193737"
	I1001 19:29:09.177912   37328 main.go:141] libmachine: (ha-193737) Calling .GetMachineName
	I1001 19:29:09.178172   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:29:09.181484   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:29:09.181951   37328 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:29:09.181971   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:29:09.182120   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:29:09.182311   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:29:09.182437   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:29:09.182548   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:29:09.182728   37328 main.go:141] libmachine: Using SSH client type: native
	I1001 19:29:09.182945   37328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:29:09.182966   37328 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-193737 && echo "ha-193737" | sudo tee /etc/hostname
	I1001 19:29:09.305362   37328 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-193737
	
	I1001 19:29:09.305390   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:29:09.308770   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:29:09.309176   37328 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:29:09.309201   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:29:09.309443   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:29:09.309651   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:29:09.309888   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:29:09.310094   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:29:09.310355   37328 main.go:141] libmachine: Using SSH client type: native
	I1001 19:29:09.310549   37328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:29:09.310572   37328 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-193737' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-193737/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-193737' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 19:29:09.417404   37328 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 19:29:09.417436   37328 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 19:29:09.417481   37328 buildroot.go:174] setting up certificates
	I1001 19:29:09.417503   37328 provision.go:84] configureAuth start
	I1001 19:29:09.417518   37328 main.go:141] libmachine: (ha-193737) Calling .GetMachineName
	I1001 19:29:09.417786   37328 main.go:141] libmachine: (ha-193737) Calling .GetIP
	I1001 19:29:09.420372   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:29:09.420836   37328 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:29:09.420865   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:29:09.421099   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:29:09.423481   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:29:09.423848   37328 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:29:09.423884   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:29:09.424042   37328 provision.go:143] copyHostCerts
	I1001 19:29:09.424072   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:29:09.424128   37328 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 19:29:09.424137   37328 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:29:09.424205   37328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 19:29:09.424290   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:29:09.424307   37328 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 19:29:09.424320   37328 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:29:09.424346   37328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 19:29:09.424431   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:29:09.424451   37328 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 19:29:09.424455   37328 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:29:09.424492   37328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 19:29:09.424554   37328 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.ha-193737 san=[127.0.0.1 192.168.39.14 ha-193737 localhost minikube]
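
The "generating server cert" step above issues a serving certificate whose subject alternative names cover the hostnames and IPs listed in that log line. A minimal sketch of such a step with Go's crypto/x509 follows; it is self-signed for brevity, whereas the provisioning step in the log signs with the CA key and paths shown above.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a key and a certificate template whose SANs mirror the
	// san=[...] list logged above.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-193737"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-193737", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.14")},
	}
	// Self-signed here for brevity; the real flow signs with the CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
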
	I1001 19:29:09.534187   37328 provision.go:177] copyRemoteCerts
	I1001 19:29:09.534239   37328 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 19:29:09.534260   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:29:09.537352   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:29:09.537737   37328 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:29:09.537765   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:29:09.537981   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:29:09.538152   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:29:09.538302   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:29:09.538393   37328 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:29:09.619235   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 19:29:09.619333   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1001 19:29:09.645348   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 19:29:09.645438   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 19:29:09.673071   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 19:29:09.673151   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 19:29:09.704248   37328 provision.go:87] duration metric: took 286.730847ms to configureAuth
	I1001 19:29:09.704279   37328 buildroot.go:189] setting minikube options for container-runtime
	I1001 19:29:09.704615   37328 config.go:182] Loaded profile config "ha-193737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:29:09.704693   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:29:09.707374   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:29:09.707795   37328 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:29:09.707823   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:29:09.708006   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:29:09.708215   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:29:09.708350   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:29:09.708482   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:29:09.708621   37328 main.go:141] libmachine: Using SSH client type: native
	I1001 19:29:09.708823   37328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:29:09.708847   37328 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 19:30:40.599375   37328 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 19:30:40.599407   37328 machine.go:96] duration metric: took 1m31.538596323s to provisionDockerMachine
	I1001 19:30:40.599423   37328 start.go:293] postStartSetup for "ha-193737" (driver="kvm2")
	I1001 19:30:40.599437   37328 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 19:30:40.599486   37328 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:30:40.599815   37328 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 19:30:40.599849   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:30:40.603054   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:30:40.603452   37328 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:30:40.603476   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:30:40.603668   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:30:40.603834   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:30:40.604021   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:30:40.604162   37328 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:30:40.687847   37328 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 19:30:40.692107   37328 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 19:30:40.692146   37328 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 19:30:40.692208   37328 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 19:30:40.692279   37328 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 19:30:40.692289   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /etc/ssl/certs/184302.pem
	I1001 19:30:40.692420   37328 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 19:30:40.701750   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:30:40.725539   37328 start.go:296] duration metric: took 126.10159ms for postStartSetup
	I1001 19:30:40.725576   37328 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:30:40.725867   37328 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1001 19:30:40.725892   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:30:40.728740   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:30:40.729170   37328 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:30:40.729197   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:30:40.729648   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:30:40.730783   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:30:40.731004   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:30:40.731156   37328 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	W1001 19:30:40.814694   37328 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1001 19:30:40.814729   37328 fix.go:56] duration metric: took 1m31.775834652s for fixHost
	I1001 19:30:40.814757   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:30:40.817578   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:30:40.818056   37328 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:30:40.818091   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:30:40.818248   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:30:40.818449   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:30:40.818604   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:30:40.818723   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:30:40.818870   37328 main.go:141] libmachine: Using SSH client type: native
	I1001 19:30:40.819096   37328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I1001 19:30:40.819109   37328 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 19:30:40.921284   37328 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727811040.894124143
	
	I1001 19:30:40.921304   37328 fix.go:216] guest clock: 1727811040.894124143
	I1001 19:30:40.921312   37328 fix.go:229] Guest: 2024-10-01 19:30:40.894124143 +0000 UTC Remote: 2024-10-01 19:30:40.81474032 +0000 UTC m=+91.911975595 (delta=79.383823ms)
	I1001 19:30:40.921331   37328 fix.go:200] guest clock delta is within tolerance: 79.383823ms
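
The clock check above parses the guest's date +%s.%N output and compares it with the host-side timestamp. A small Go illustration of the same arithmetic; the timestamps are the ones from the log, while the 2-second tolerance is an assumption made for the example.

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports the absolute guest/host clock delta and whether it
// falls inside the allowed skew.
func withinTolerance(guest, remote time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1727811040, 894124143).UTC()                  // parsed from `date +%s.%N`
	remote := time.Date(2024, 10, 1, 19, 30, 40, 814740320, time.UTC) // host-side timestamp
	delta, ok := withinTolerance(guest, remote, 2*time.Second)
	fmt.Println(delta, ok) // ~79.383823ms true, matching the log above
}
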
	I1001 19:30:40.921336   37328 start.go:83] releasing machines lock for "ha-193737", held for 1m31.882452335s
	I1001 19:30:40.921356   37328 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:30:40.921608   37328 main.go:141] libmachine: (ha-193737) Calling .GetIP
	I1001 19:30:40.924593   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:30:40.925006   37328 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:30:40.925027   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:30:40.925218   37328 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:30:40.925706   37328 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:30:40.925881   37328 main.go:141] libmachine: (ha-193737) Calling .DriverName
	I1001 19:30:40.925992   37328 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 19:30:40.926028   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:30:40.926102   37328 ssh_runner.go:195] Run: cat /version.json
	I1001 19:30:40.926126   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHHostname
	I1001 19:30:40.928744   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:30:40.928801   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:30:40.929178   37328 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:30:40.929206   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:30:40.929233   37328 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:30:40.929247   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:30:40.929373   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:30:40.929501   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHPort
	I1001 19:30:40.929578   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:30:40.929650   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHKeyPath
	I1001 19:30:40.929722   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:30:40.929787   37328 main.go:141] libmachine: (ha-193737) Calling .GetSSHUsername
	I1001 19:30:40.929824   37328 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:30:40.929894   37328 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/ha-193737/id_rsa Username:docker}
	I1001 19:30:41.045553   37328 ssh_runner.go:195] Run: systemctl --version
	I1001 19:30:41.051502   37328 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 19:30:41.219086   37328 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 19:30:41.225476   37328 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 19:30:41.225565   37328 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 19:30:41.236020   37328 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1001 19:30:41.236050   37328 start.go:495] detecting cgroup driver to use...
	I1001 19:30:41.236122   37328 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 19:30:41.253549   37328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 19:30:41.269349   37328 docker.go:217] disabling cri-docker service (if available) ...
	I1001 19:30:41.269421   37328 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 19:30:41.284876   37328 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 19:30:41.299341   37328 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 19:30:41.453531   37328 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 19:30:41.598069   37328 docker.go:233] disabling docker service ...
	I1001 19:30:41.598135   37328 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 19:30:41.615329   37328 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 19:30:41.628733   37328 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 19:30:41.776481   37328 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 19:30:41.934366   37328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 19:30:41.947596   37328 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 19:30:41.966515   37328 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 19:30:41.966592   37328 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:30:41.977069   37328 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 19:30:41.977135   37328 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:30:41.987034   37328 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:30:41.997115   37328 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:30:42.007263   37328 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 19:30:42.017806   37328 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:30:42.028946   37328 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:30:42.040035   37328 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:30:42.050185   37328 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 19:30:42.059298   37328 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 19:30:42.068579   37328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:30:42.226919   37328 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 19:30:42.463910   37328 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 19:30:42.463995   37328 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
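
The "Will wait 60s for socket path" step above polls until the CRI-O socket reappears after the restart. A minimal Go sketch of such a wait loop; the poll interval and helper name are assumptions for the example.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the given path exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // poll interval is an assumption
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}
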
	I1001 19:30:42.469021   37328 start.go:563] Will wait 60s for crictl version
	I1001 19:30:42.469086   37328 ssh_runner.go:195] Run: which crictl
	I1001 19:30:42.472762   37328 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 19:30:42.511526   37328 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 19:30:42.511607   37328 ssh_runner.go:195] Run: crio --version
	I1001 19:30:42.540609   37328 ssh_runner.go:195] Run: crio --version
	I1001 19:30:42.571459   37328 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 19:30:42.572552   37328 main.go:141] libmachine: (ha-193737) Calling .GetIP
	I1001 19:30:42.575271   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:30:42.575645   37328 main.go:141] libmachine: (ha-193737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2b:09", ip: ""} in network mk-ha-193737: {Iface:virbr1 ExpiryTime:2024-10-01 20:20:01 +0000 UTC Type:0 Mac:52:54:00:80:2b:09 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-193737 Clientid:01:52:54:00:80:2b:09}
	I1001 19:30:42.575669   37328 main.go:141] libmachine: (ha-193737) DBG | domain ha-193737 has defined IP address 192.168.39.14 and MAC address 52:54:00:80:2b:09 in network mk-ha-193737
	I1001 19:30:42.575882   37328 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 19:30:42.580521   37328 kubeadm.go:883] updating cluster {Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.152 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 19:30:42.580640   37328 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 19:30:42.580679   37328 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 19:30:42.623368   37328 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 19:30:42.623391   37328 crio.go:433] Images already preloaded, skipping extraction
	I1001 19:30:42.623440   37328 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 19:30:42.659185   37328 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 19:30:42.659208   37328 cache_images.go:84] Images are preloaded, skipping loading
	I1001 19:30:42.659226   37328 kubeadm.go:934] updating node { 192.168.39.14 8443 v1.31.1 crio true true} ...
	I1001 19:30:42.659340   37328 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-193737 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 19:30:42.659416   37328 ssh_runner.go:195] Run: crio config
	I1001 19:30:42.706099   37328 cni.go:84] Creating CNI manager for ""
	I1001 19:30:42.706123   37328 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1001 19:30:42.706133   37328 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 19:30:42.706154   37328 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.14 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-193737 NodeName:ha-193737 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.14"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.14 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 19:30:42.706281   37328 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.14
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-193737"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.14
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.14"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 19:30:42.706301   37328 kube-vip.go:115] generating kube-vip config ...
	I1001 19:30:42.706336   37328 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1001 19:30:42.718095   37328 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1001 19:30:42.718208   37328 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
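
The lb_enable/lb_port settings in the manifest above turn on kube-vip's control-plane load balancing, which relies on IPVS; that is consistent with the earlier modprobe of the ip_vs modules in this log. Below is an illustrative Go check, not minikube code, for whether the module is loaded; the helper name is invented for the example.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// moduleLoaded scans /proc/modules for a line starting with the module name,
// the same module the log loads with `modprobe --all ip_vs ...` over SSH.
func moduleLoaded(name string) (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		if strings.HasPrefix(s.Text(), name+" ") {
			return true, nil
		}
	}
	return false, s.Err()
}

func main() {
	ok, err := moduleLoaded("ip_vs")
	fmt.Println("ip_vs loaded:", ok, err)
}
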
	I1001 19:30:42.718272   37328 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 19:30:42.728080   37328 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 19:30:42.728148   37328 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1001 19:30:42.737386   37328 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1001 19:30:42.754003   37328 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 19:30:42.770286   37328 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I1001 19:30:42.786791   37328 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1001 19:30:42.803229   37328 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1001 19:30:42.808100   37328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:30:42.957282   37328 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 19:30:42.971531   37328 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737 for IP: 192.168.39.14
	I1001 19:30:42.971555   37328 certs.go:194] generating shared ca certs ...
	I1001 19:30:42.971576   37328 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:30:42.971738   37328 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 19:30:42.971793   37328 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 19:30:42.971807   37328 certs.go:256] generating profile certs ...
	I1001 19:30:42.971890   37328 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/client.key
	I1001 19:30:42.971924   37328 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.058751ee
	I1001 19:30:42.971954   37328 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.058751ee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.14 192.168.39.27 192.168.39.101 192.168.39.254]
	I1001 19:30:43.156442   37328 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.058751ee ...
	I1001 19:30:43.156481   37328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.058751ee: {Name:mk398f3bf2de18eb9255f2abe557f9ee8d4c74e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:30:43.156690   37328 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.058751ee ...
	I1001 19:30:43.156707   37328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.058751ee: {Name:mk011ee79e6c6902067af04844ffcc7247fec588 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:30:43.156812   37328 certs.go:381] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt.058751ee -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt
	I1001 19:30:43.156997   37328 certs.go:385] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key.058751ee -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key
	I1001 19:30:43.157157   37328 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key
	I1001 19:30:43.157175   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 19:30:43.157195   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 19:30:43.157212   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 19:30:43.157233   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 19:30:43.157252   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 19:30:43.157271   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 19:30:43.157294   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 19:30:43.157312   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 19:30:43.157373   37328 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem (1338 bytes)
	W1001 19:30:43.157414   37328 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430_empty.pem, impossibly tiny 0 bytes
	I1001 19:30:43.157428   37328 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 19:30:43.157469   37328 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 19:30:43.157500   37328 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 19:30:43.157531   37328 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 19:30:43.157588   37328 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:30:43.157626   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /usr/share/ca-certificates/184302.pem
	I1001 19:30:43.157652   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:30:43.157670   37328 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem -> /usr/share/ca-certificates/18430.pem
	I1001 19:30:43.158203   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 19:30:43.182962   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 19:30:43.209517   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 19:30:43.235236   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 19:30:43.259478   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1001 19:30:43.283750   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 19:30:43.307938   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 19:30:43.332315   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/ha-193737/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 19:30:43.356259   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /usr/share/ca-certificates/184302.pem (1708 bytes)
	I1001 19:30:43.379982   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 19:30:43.403792   37328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem --> /usr/share/ca-certificates/18430.pem (1338 bytes)
	I1001 19:30:43.427640   37328 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 19:30:43.444144   37328 ssh_runner.go:195] Run: openssl version
	I1001 19:30:43.450283   37328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18430.pem && ln -fs /usr/share/ca-certificates/18430.pem /etc/ssl/certs/18430.pem"
	I1001 19:30:43.461547   37328 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18430.pem
	I1001 19:30:43.465989   37328 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 19:30:43.466049   37328 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18430.pem
	I1001 19:30:43.471519   37328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18430.pem /etc/ssl/certs/51391683.0"
	I1001 19:30:43.480515   37328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/184302.pem && ln -fs /usr/share/ca-certificates/184302.pem /etc/ssl/certs/184302.pem"
	I1001 19:30:43.490784   37328 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/184302.pem
	I1001 19:30:43.494982   37328 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 19:30:43.495021   37328 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/184302.pem
	I1001 19:30:43.500231   37328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/184302.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 19:30:43.509685   37328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 19:30:43.520238   37328 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:30:43.524704   37328 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:30:43.524758   37328 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:30:43.530194   37328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 19:30:43.539135   37328 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 19:30:43.543573   37328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1001 19:30:43.549061   37328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1001 19:30:43.554388   37328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1001 19:30:43.559805   37328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1001 19:30:43.565256   37328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1001 19:30:43.570513   37328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
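
The openssl x509 -checkend 86400 runs above ask whether each certificate expires within the next 24 hours. A rough Go analogue using crypto/x509, shown only to spell out what -checkend does; the path used in main is one of those from the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within the
// given window; `openssl x509 -checkend` exits non-zero in that case.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
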
	I1001 19:30:43.575858   37328 kubeadm.go:392] StartCluster: {Name:ha-193737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-193737 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.152 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:30:43.575964   37328 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 19:30:43.576003   37328 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 19:30:43.615896   37328 cri.go:89] found id: "55dcc0edc52a1be3b0b34b8c6d6bb9b7f606b5f9038d86694ef0cb9f8c2783a0"
	I1001 19:30:43.615918   37328 cri.go:89] found id: "01fbb357fab4f3446ed3564800db9f3d7f8ffa47c32db7026219630bec07a664"
	I1001 19:30:43.615922   37328 cri.go:89] found id: "ba9298ce250b67db2ca42f0c725e3969cbe562dea70767c2a9f85e8814364c27"
	I1001 19:30:43.615925   37328 cri.go:89] found id: "75485355206ed8610939d128b62d0d55ec84cbb8615f7261469f854e52b6447d"
	I1001 19:30:43.615928   37328 cri.go:89] found id: "b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3"
	I1001 19:30:43.615931   37328 cri.go:89] found id: "c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a"
	I1001 19:30:43.615933   37328 cri.go:89] found id: "25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525"
	I1001 19:30:43.615935   37328 cri.go:89] found id: "6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c"
	I1001 19:30:43.615939   37328 cri.go:89] found id: "c962c4138a0019c76f981b72e8948efd386403002bb686f7e70c38bc20c3d542"
	I1001 19:30:43.615943   37328 cri.go:89] found id: "7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e"
	I1001 19:30:43.615946   37328 cri.go:89] found id: "d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7"
	I1001 19:30:43.615951   37328 cri.go:89] found id: "d2c57920320eb65f4477cbff8aec818a7ecddb3461a77ca111172e4d1a5f7e71"
	I1001 19:30:43.615954   37328 cri.go:89] found id: "fc9d05172b801a89ec36c0d0d549bfeee1aafb67f6586e3f4f163cb63f01d062"
	I1001 19:30:43.615956   37328 cri.go:89] found id: ""
	I1001 19:30:43.615993   37328 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Oct 01 19:35:59 ha-193737 crio[3761]: time="2024-10-01 19:35:59.056814281Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811359056683954,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed0e7b25-fe89-4f2e-bd6a-1248fc5c7634 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:35:59 ha-193737 crio[3761]: time="2024-10-01 19:35:59.057280830Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=84cb8d94-7d6f-4066-82f5-eb37d90e0efa name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:35:59 ha-193737 crio[3761]: time="2024-10-01 19:35:59.057351578Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=84cb8d94-7d6f-4066-82f5-eb37d90e0efa name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:35:59 ha-193737 crio[3761]: time="2024-10-01 19:35:59.057909638Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:42d2996fa57056f337846a0c663c666896bc5623403716bf936f95a745c26751,PodSandboxId:d22b13768ce874522ceed8cc3440fb78e51adfcc55c5d48b10e0a1ba76f2fa79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727811144032445382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc73e66125bdf484da9d957113d2dbcd22b1cca191ae53bdd53cddf4df26a9b4,PodSandboxId:f0f8610a34814bf5843b3ba4d4c2b837552c02efdd35a53e341c9b8afa41b367,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727811095028393362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cd510d04d444e2a3fd26699f0dbb26,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe0e73911c1f7037af24faf34585ea0f2dd2050508c66f82fceb1d3f63350357,PodSandboxId:d22b13768ce874522ceed8cc3440fb78e51adfcc55c5d48b10e0a1ba76f2fa79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727811095031826420,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0355d034cef455e56a215145c9058bc9694f5da8c3c4c2172ae416d5af558add,PodSandboxId:9ddcc19db3580843861d67b547a77d3a8d429b55c0a3c2ee3bba3ad96f6c726b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727811091033631042,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da37e56f1a046294a51f258d6619d188b789c2740337a7586e481feaaca27edc,PodSandboxId:fca6d99cf42a83642203449fa4687750128541f4acf4a54e0e9f868f2262e0e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727811082317090540,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:198dab162e8aa5e19a932dc97135bef548d2a2744de22fb4fa9898746a7a9788,PodSandboxId:1bd6f04d79ddbb801bfd04b223d0aa786f5e5f02458dde8c322d43b2862332ad,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727811066130351937,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 813365ea1e3446cbdf9a69d3a73954fd,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e64eec86b7056617efbd7c498622e3f5958857bac4d73be6899dbf8db5c89cf9,PodSandboxId:9133de98e8a424f31f1f22b1bf4d2d17ac28543288eb487a6118187c62434bd9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727811049246022559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82c0e82b0f6c09d87ec13c643737a4ccf5b340e9502946d56bbb217eb96dbe93,PodSandboxId:9c0014ccd9ccba46c670e7c0f6df4fb143de9e63e48a89015e6738bae92e835b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727811048808876184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61ff92cf26d6e3aed32887454ab7d2058de0b8a5e7ea7861f48b1d01a5939727,PodSandboxId:b73cc006e86acbbdb4fd391f82012e597b411d2e2a99955226e63c46f802968b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727811049155845041,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a718e9dc3c409631fa8f5dc4d076b18ea96e3aa4e1019102c18b202167818924,PodSandboxId:7eb78842f19b505167c5397f172a99bb1f7b17780e57a3592433042cee608db5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727811049104667513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-5017-4ada-bf2f-61b640ee2030,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83950c035f12e98c6757b6223f78b7f5a39d863ec5c60eac7c00e820c1c5c076,PodSandboxId:da377024b36877f6c3e94272b41630ae1d13493ca8d22c13b39a58463542dba5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727811048970844751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3382226e00b6ef4a63086f6faaee763c7a138978d0ef813c494eb8ffc1d02c5f,PodSandboxId:f0f8610a34814bf5843b3ba4d4c2b837552c02efdd35a53e341c9b8afa41b367,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727811048981094425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 26cd510d04d444e2a3fd26699f0dbb26,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95bc5dbd279eda6d388ec30e614a300b3e7377edf477e235fd68a408b5928575,PodSandboxId:9ddcc19db3580843861d67b547a77d3a8d429b55c0a3c2ee3bba3ad96f6c726b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727811048887857582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d10a89edf2195041bf7b272302c3c39e01a5af56118e22a81c8e75031db83b8b,PodSandboxId:7761291d35db572afa54b59123e615dd81d37ee1fcc9ebe5e1715dd51dcac7c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727811048827632586,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,},Ann
otations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d523f1298c385184a9e7db15e8734998757e06fff0e55aa1844ab59ccf00f07e,PodSandboxId:8ddf36dc2effd5b387a7cb5392afc6a34d689dee2d0047e27480f81871db6f74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727810590371392470,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3,PodSandboxId:b4ab4980fd9c6cd234ac1f28d252c896bc77b9f7b271045be59cae0784f96217,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727810449363564194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a,PodSandboxId:69e4ceb6e3399a835dcd125c01919d9053abee035e3bf2b2595377489dc56cb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727810449354599709,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-5017-4ada-bf2f-61b640ee2030,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525,PodSandboxId:f7fcfb918d1fd69f11a77764747f9408ce5ec85d4f114f22269ab22622168240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727810437214011626,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c,PodSandboxId:65474abfbeabf483f4a28a3e3229b175f3f41ffad78ea5d1303d49d9419d3ec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727810437061941516,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7,PodSandboxId:4873897c8ffd7c74e4093155af6f4743491c9dcea652c8fba96e6904de277652,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727810424745133125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e,PodSandboxId:c74bc4df7851a3393d5eca165d5e24dbdef7af644bc9844819bad425c8663aac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727810424759398087,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=84cb8d94-7d6f-4066-82f5-eb37d90e0efa name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:35:59 ha-193737 crio[3761]: time="2024-10-01 19:35:59.100215190Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0b26bf13-ecb5-4040-a045-b11cc9211359 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:35:59 ha-193737 crio[3761]: time="2024-10-01 19:35:59.100289050Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0b26bf13-ecb5-4040-a045-b11cc9211359 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:35:59 ha-193737 crio[3761]: time="2024-10-01 19:35:59.101661622Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=17459525-97ad-4da0-94c1-46030a0cccfd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:35:59 ha-193737 crio[3761]: time="2024-10-01 19:35:59.102408753Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811359102381133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17459525-97ad-4da0-94c1-46030a0cccfd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:35:59 ha-193737 crio[3761]: time="2024-10-01 19:35:59.103003575Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d910debc-4de1-4576-aaca-b6c487b63abe name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:35:59 ha-193737 crio[3761]: time="2024-10-01 19:35:59.103076292Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d910debc-4de1-4576-aaca-b6c487b63abe name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:35:59 ha-193737 crio[3761]: time="2024-10-01 19:35:59.103495330Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:42d2996fa57056f337846a0c663c666896bc5623403716bf936f95a745c26751,PodSandboxId:d22b13768ce874522ceed8cc3440fb78e51adfcc55c5d48b10e0a1ba76f2fa79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727811144032445382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc73e66125bdf484da9d957113d2dbcd22b1cca191ae53bdd53cddf4df26a9b4,PodSandboxId:f0f8610a34814bf5843b3ba4d4c2b837552c02efdd35a53e341c9b8afa41b367,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727811095028393362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cd510d04d444e2a3fd26699f0dbb26,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe0e73911c1f7037af24faf34585ea0f2dd2050508c66f82fceb1d3f63350357,PodSandboxId:d22b13768ce874522ceed8cc3440fb78e51adfcc55c5d48b10e0a1ba76f2fa79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727811095031826420,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0355d034cef455e56a215145c9058bc9694f5da8c3c4c2172ae416d5af558add,PodSandboxId:9ddcc19db3580843861d67b547a77d3a8d429b55c0a3c2ee3bba3ad96f6c726b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727811091033631042,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da37e56f1a046294a51f258d6619d188b789c2740337a7586e481feaaca27edc,PodSandboxId:fca6d99cf42a83642203449fa4687750128541f4acf4a54e0e9f868f2262e0e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727811082317090540,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:198dab162e8aa5e19a932dc97135bef548d2a2744de22fb4fa9898746a7a9788,PodSandboxId:1bd6f04d79ddbb801bfd04b223d0aa786f5e5f02458dde8c322d43b2862332ad,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727811066130351937,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 813365ea1e3446cbdf9a69d3a73954fd,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e64eec86b7056617efbd7c498622e3f5958857bac4d73be6899dbf8db5c89cf9,PodSandboxId:9133de98e8a424f31f1f22b1bf4d2d17ac28543288eb487a6118187c62434bd9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727811049246022559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82c0e82b0f6c09d87ec13c643737a4ccf5b340e9502946d56bbb217eb96dbe93,PodSandboxId:9c0014ccd9ccba46c670e7c0f6df4fb143de9e63e48a89015e6738bae92e835b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727811048808876184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61ff92cf26d6e3aed32887454ab7d2058de0b8a5e7ea7861f48b1d01a5939727,PodSandboxId:b73cc006e86acbbdb4fd391f82012e597b411d2e2a99955226e63c46f802968b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727811049155845041,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a718e9dc3c409631fa8f5dc4d076b18ea96e3aa4e1019102c18b202167818924,PodSandboxId:7eb78842f19b505167c5397f172a99bb1f7b17780e57a3592433042cee608db5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727811049104667513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-5017-4ada-bf2f-61b640ee2030,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83950c035f12e98c6757b6223f78b7f5a39d863ec5c60eac7c00e820c1c5c076,PodSandboxId:da377024b36877f6c3e94272b41630ae1d13493ca8d22c13b39a58463542dba5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727811048970844751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3382226e00b6ef4a63086f6faaee763c7a138978d0ef813c494eb8ffc1d02c5f,PodSandboxId:f0f8610a34814bf5843b3ba4d4c2b837552c02efdd35a53e341c9b8afa41b367,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727811048981094425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 26cd510d04d444e2a3fd26699f0dbb26,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95bc5dbd279eda6d388ec30e614a300b3e7377edf477e235fd68a408b5928575,PodSandboxId:9ddcc19db3580843861d67b547a77d3a8d429b55c0a3c2ee3bba3ad96f6c726b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727811048887857582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d10a89edf2195041bf7b272302c3c39e01a5af56118e22a81c8e75031db83b8b,PodSandboxId:7761291d35db572afa54b59123e615dd81d37ee1fcc9ebe5e1715dd51dcac7c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727811048827632586,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,},Ann
otations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d523f1298c385184a9e7db15e8734998757e06fff0e55aa1844ab59ccf00f07e,PodSandboxId:8ddf36dc2effd5b387a7cb5392afc6a34d689dee2d0047e27480f81871db6f74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727810590371392470,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3,PodSandboxId:b4ab4980fd9c6cd234ac1f28d252c896bc77b9f7b271045be59cae0784f96217,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727810449363564194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a,PodSandboxId:69e4ceb6e3399a835dcd125c01919d9053abee035e3bf2b2595377489dc56cb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727810449354599709,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-5017-4ada-bf2f-61b640ee2030,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525,PodSandboxId:f7fcfb918d1fd69f11a77764747f9408ce5ec85d4f114f22269ab22622168240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727810437214011626,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c,PodSandboxId:65474abfbeabf483f4a28a3e3229b175f3f41ffad78ea5d1303d49d9419d3ec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727810437061941516,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7,PodSandboxId:4873897c8ffd7c74e4093155af6f4743491c9dcea652c8fba96e6904de277652,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727810424745133125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e,PodSandboxId:c74bc4df7851a3393d5eca165d5e24dbdef7af644bc9844819bad425c8663aac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727810424759398087,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d910debc-4de1-4576-aaca-b6c487b63abe name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:35:59 ha-193737 crio[3761]: time="2024-10-01 19:35:59.149384343Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3f87f607-d0a7-43f3-b500-308ac6bab5b7 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:35:59 ha-193737 crio[3761]: time="2024-10-01 19:35:59.149459444Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3f87f607-d0a7-43f3-b500-308ac6bab5b7 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:35:59 ha-193737 crio[3761]: time="2024-10-01 19:35:59.150843423Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=deb21fcf-db8e-43ef-a957-aa7a62c57101 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:35:59 ha-193737 crio[3761]: time="2024-10-01 19:35:59.151244695Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811359151223486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=deb21fcf-db8e-43ef-a957-aa7a62c57101 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:35:59 ha-193737 crio[3761]: time="2024-10-01 19:35:59.151896594Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=80acf70c-3579-47b6-a1c4-6393dca666ef name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:35:59 ha-193737 crio[3761]: time="2024-10-01 19:35:59.151955126Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=80acf70c-3579-47b6-a1c4-6393dca666ef name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:35:59 ha-193737 crio[3761]: time="2024-10-01 19:35:59.152402082Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:42d2996fa57056f337846a0c663c666896bc5623403716bf936f95a745c26751,PodSandboxId:d22b13768ce874522ceed8cc3440fb78e51adfcc55c5d48b10e0a1ba76f2fa79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727811144032445382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc73e66125bdf484da9d957113d2dbcd22b1cca191ae53bdd53cddf4df26a9b4,PodSandboxId:f0f8610a34814bf5843b3ba4d4c2b837552c02efdd35a53e341c9b8afa41b367,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727811095028393362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cd510d04d444e2a3fd26699f0dbb26,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe0e73911c1f7037af24faf34585ea0f2dd2050508c66f82fceb1d3f63350357,PodSandboxId:d22b13768ce874522ceed8cc3440fb78e51adfcc55c5d48b10e0a1ba76f2fa79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727811095031826420,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0355d034cef455e56a215145c9058bc9694f5da8c3c4c2172ae416d5af558add,PodSandboxId:9ddcc19db3580843861d67b547a77d3a8d429b55c0a3c2ee3bba3ad96f6c726b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727811091033631042,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da37e56f1a046294a51f258d6619d188b789c2740337a7586e481feaaca27edc,PodSandboxId:fca6d99cf42a83642203449fa4687750128541f4acf4a54e0e9f868f2262e0e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727811082317090540,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:198dab162e8aa5e19a932dc97135bef548d2a2744de22fb4fa9898746a7a9788,PodSandboxId:1bd6f04d79ddbb801bfd04b223d0aa786f5e5f02458dde8c322d43b2862332ad,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727811066130351937,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 813365ea1e3446cbdf9a69d3a73954fd,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e64eec86b7056617efbd7c498622e3f5958857bac4d73be6899dbf8db5c89cf9,PodSandboxId:9133de98e8a424f31f1f22b1bf4d2d17ac28543288eb487a6118187c62434bd9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727811049246022559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82c0e82b0f6c09d87ec13c643737a4ccf5b340e9502946d56bbb217eb96dbe93,PodSandboxId:9c0014ccd9ccba46c670e7c0f6df4fb143de9e63e48a89015e6738bae92e835b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727811048808876184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61ff92cf26d6e3aed32887454ab7d2058de0b8a5e7ea7861f48b1d01a5939727,PodSandboxId:b73cc006e86acbbdb4fd391f82012e597b411d2e2a99955226e63c46f802968b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727811049155845041,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a718e9dc3c409631fa8f5dc4d076b18ea96e3aa4e1019102c18b202167818924,PodSandboxId:7eb78842f19b505167c5397f172a99bb1f7b17780e57a3592433042cee608db5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727811049104667513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-5017-4ada-bf2f-61b640ee2030,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83950c035f12e98c6757b6223f78b7f5a39d863ec5c60eac7c00e820c1c5c076,PodSandboxId:da377024b36877f6c3e94272b41630ae1d13493ca8d22c13b39a58463542dba5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727811048970844751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3382226e00b6ef4a63086f6faaee763c7a138978d0ef813c494eb8ffc1d02c5f,PodSandboxId:f0f8610a34814bf5843b3ba4d4c2b837552c02efdd35a53e341c9b8afa41b367,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727811048981094425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 26cd510d04d444e2a3fd26699f0dbb26,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95bc5dbd279eda6d388ec30e614a300b3e7377edf477e235fd68a408b5928575,PodSandboxId:9ddcc19db3580843861d67b547a77d3a8d429b55c0a3c2ee3bba3ad96f6c726b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727811048887857582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d10a89edf2195041bf7b272302c3c39e01a5af56118e22a81c8e75031db83b8b,PodSandboxId:7761291d35db572afa54b59123e615dd81d37ee1fcc9ebe5e1715dd51dcac7c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727811048827632586,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,},Ann
otations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d523f1298c385184a9e7db15e8734998757e06fff0e55aa1844ab59ccf00f07e,PodSandboxId:8ddf36dc2effd5b387a7cb5392afc6a34d689dee2d0047e27480f81871db6f74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727810590371392470,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3,PodSandboxId:b4ab4980fd9c6cd234ac1f28d252c896bc77b9f7b271045be59cae0784f96217,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727810449363564194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a,PodSandboxId:69e4ceb6e3399a835dcd125c01919d9053abee035e3bf2b2595377489dc56cb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727810449354599709,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-5017-4ada-bf2f-61b640ee2030,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525,PodSandboxId:f7fcfb918d1fd69f11a77764747f9408ce5ec85d4f114f22269ab22622168240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727810437214011626,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c,PodSandboxId:65474abfbeabf483f4a28a3e3229b175f3f41ffad78ea5d1303d49d9419d3ec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727810437061941516,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7,PodSandboxId:4873897c8ffd7c74e4093155af6f4743491c9dcea652c8fba96e6904de277652,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727810424745133125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e,PodSandboxId:c74bc4df7851a3393d5eca165d5e24dbdef7af644bc9844819bad425c8663aac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727810424759398087,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=80acf70c-3579-47b6-a1c4-6393dca666ef name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:35:59 ha-193737 crio[3761]: time="2024-10-01 19:35:59.203417730Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2e617562-348f-4da5-a638-9d0741b6e7ad name=/runtime.v1.RuntimeService/Version
	Oct 01 19:35:59 ha-193737 crio[3761]: time="2024-10-01 19:35:59.203543342Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2e617562-348f-4da5-a638-9d0741b6e7ad name=/runtime.v1.RuntimeService/Version
	Oct 01 19:35:59 ha-193737 crio[3761]: time="2024-10-01 19:35:59.204788194Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7eb0d5d2-7923-42ee-af45-48b2c7406010 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:35:59 ha-193737 crio[3761]: time="2024-10-01 19:35:59.205652560Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811359205624042,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7eb0d5d2-7923-42ee-af45-48b2c7406010 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:35:59 ha-193737 crio[3761]: time="2024-10-01 19:35:59.207424822Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94255255-0dd6-44df-88f3-a7a180a279b4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:35:59 ha-193737 crio[3761]: time="2024-10-01 19:35:59.207538542Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94255255-0dd6-44df-88f3-a7a180a279b4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:35:59 ha-193737 crio[3761]: time="2024-10-01 19:35:59.208244494Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:42d2996fa57056f337846a0c663c666896bc5623403716bf936f95a745c26751,PodSandboxId:d22b13768ce874522ceed8cc3440fb78e51adfcc55c5d48b10e0a1ba76f2fa79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727811144032445382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc73e66125bdf484da9d957113d2dbcd22b1cca191ae53bdd53cddf4df26a9b4,PodSandboxId:f0f8610a34814bf5843b3ba4d4c2b837552c02efdd35a53e341c9b8afa41b367,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727811095028393362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cd510d04d444e2a3fd26699f0dbb26,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe0e73911c1f7037af24faf34585ea0f2dd2050508c66f82fceb1d3f63350357,PodSandboxId:d22b13768ce874522ceed8cc3440fb78e51adfcc55c5d48b10e0a1ba76f2fa79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727811095031826420,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b587a6-418b-47e5-9bf7-3fb6fa5e3372,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0355d034cef455e56a215145c9058bc9694f5da8c3c4c2172ae416d5af558add,PodSandboxId:9ddcc19db3580843861d67b547a77d3a8d429b55c0a3c2ee3bba3ad96f6c726b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727811091033631042,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da37e56f1a046294a51f258d6619d188b789c2740337a7586e481feaaca27edc,PodSandboxId:fca6d99cf42a83642203449fa4687750128541f4acf4a54e0e9f868f2262e0e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727811082317090540,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:198dab162e8aa5e19a932dc97135bef548d2a2744de22fb4fa9898746a7a9788,PodSandboxId:1bd6f04d79ddbb801bfd04b223d0aa786f5e5f02458dde8c322d43b2862332ad,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1727811066130351937,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 813365ea1e3446cbdf9a69d3a73954fd,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e64eec86b7056617efbd7c498622e3f5958857bac4d73be6899dbf8db5c89cf9,PodSandboxId:9133de98e8a424f31f1f22b1bf4d2d17ac28543288eb487a6118187c62434bd9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727811049246022559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82c0e82b0f6c09d87ec13c643737a4ccf5b340e9502946d56bbb217eb96dbe93,PodSandboxId:9c0014ccd9ccba46c670e7c0f6df4fb143de9e63e48a89015e6738bae92e835b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727811048808876184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61ff92cf26d6e3aed32887454ab7d2058de0b8a5e7ea7861f48b1d01a5939727,PodSandboxId:b73cc006e86acbbdb4fd391f82012e597b411d2e2a99955226e63c46f802968b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727811049155845041,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a718e9dc3c409631fa8f5dc4d076b18ea96e3aa4e1019102c18b202167818924,PodSandboxId:7eb78842f19b505167c5397f172a99bb1f7b17780e57a3592433042cee608db5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727811049104667513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-5017-4ada-bf2f-61b640ee2030,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83950c035f12e98c6757b6223f78b7f5a39d863ec5c60eac7c00e820c1c5c076,PodSandboxId:da377024b36877f6c3e94272b41630ae1d13493ca8d22c13b39a58463542dba5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727811048970844751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3382226e00b6ef4a63086f6faaee763c7a138978d0ef813c494eb8ffc1d02c5f,PodSandboxId:f0f8610a34814bf5843b3ba4d4c2b837552c02efdd35a53e341c9b8afa41b367,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727811048981094425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 26cd510d04d444e2a3fd26699f0dbb26,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95bc5dbd279eda6d388ec30e614a300b3e7377edf477e235fd68a408b5928575,PodSandboxId:9ddcc19db3580843861d67b547a77d3a8d429b55c0a3c2ee3bba3ad96f6c726b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727811048887857582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: de600bfbca1d9c3f01fa833eb2f872cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d10a89edf2195041bf7b272302c3c39e01a5af56118e22a81c8e75031db83b8b,PodSandboxId:7761291d35db572afa54b59123e615dd81d37ee1fcc9ebe5e1715dd51dcac7c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727811048827632586,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,},Ann
otations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d523f1298c385184a9e7db15e8734998757e06fff0e55aa1844ab59ccf00f07e,PodSandboxId:8ddf36dc2effd5b387a7cb5392afc6a34d689dee2d0047e27480f81871db6f74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727810590371392470,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rbjkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba3ecbe1-fb88-4674-b679-a442b28cd68e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3,PodSandboxId:b4ab4980fd9c6cd234ac1f28d252c896bc77b9f7b271045be59cae0784f96217,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727810449363564194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hd5hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f0afff-5571-46d6-888f-8982c71ba191,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a,PodSandboxId:69e4ceb6e3399a835dcd125c01919d9053abee035e3bf2b2595377489dc56cb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727810449354599709,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2wsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e3dd318-5017-4ada-bf2f-61b640ee2030,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525,PodSandboxId:f7fcfb918d1fd69f11a77764747f9408ce5ec85d4f114f22269ab22622168240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727810437214011626,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wnr6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89e11419-0c5c-486e-bdbf-eaf6fab1e62c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c,PodSandboxId:65474abfbeabf483f4a28a3e3229b175f3f41ffad78ea5d1303d49d9419d3ec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727810437061941516,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpsll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18fec3c-2880-4860-b220-a44d5e523bed,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7,PodSandboxId:4873897c8ffd7c74e4093155af6f4743491c9dcea652c8fba96e6904de277652,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727810424745133125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0322ee97040a2f569785dff412cf907f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e,PodSandboxId:c74bc4df7851a3393d5eca165d5e24dbdef7af644bc9844819bad425c8663aac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727810424759398087,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-193737,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7769b1af58540331dfe5effd67e84a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94255255-0dd6-44df-88f3-a7a180a279b4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	42d2996fa5705       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   d22b13768ce87       storage-provisioner
	fe0e73911c1f7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   d22b13768ce87       storage-provisioner
	cc73e66125bdf       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            3                   f0f8610a34814       kube-apiserver-ha-193737
	0355d034cef45       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   2                   9ddcc19db3580       kube-controller-manager-ha-193737
	da37e56f1a046       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   fca6d99cf42a8       busybox-7dff88458-rbjkx
	198dab162e8aa       18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460                                      4 minutes ago       Running             kube-vip                  0                   1bd6f04d79ddb       kube-vip-ha-193737
	e64eec86b7056       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   1                   9133de98e8a42       coredns-7c65d6cfc9-hd5hv
	61ff92cf26d6e       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   b73cc006e86ac       kindnet-wnr6g
	a718e9dc3c409       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   1                   7eb78842f19b5       coredns-7c65d6cfc9-v2wsx
	3382226e00b6e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      5 minutes ago       Exited              kube-apiserver            2                   f0f8610a34814       kube-apiserver-ha-193737
	83950c035f12e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      5 minutes ago       Running             kube-scheduler            1                   da377024b3687       kube-scheduler-ha-193737
	95bc5dbd279ed       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      5 minutes ago       Exited              kube-controller-manager   1                   9ddcc19db3580       kube-controller-manager-ha-193737
	d10a89edf2195       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   7761291d35db5       etcd-ha-193737
	82c0e82b0f6c0       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      5 minutes ago       Running             kube-proxy                1                   9c0014ccd9ccb       kube-proxy-zpsll
	d523f1298c385       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   12 minutes ago      Exited              busybox                   0                   8ddf36dc2effd       busybox-7dff88458-rbjkx
	b9a32cfd9baec       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   b4ab4980fd9c6       coredns-7c65d6cfc9-hd5hv
	c598f8345f1d8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   69e4ceb6e3399       coredns-7c65d6cfc9-v2wsx
	25b91984e532b       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      15 minutes ago      Exited              kindnet-cni               0                   f7fcfb918d1fd       kindnet-wnr6g
	6ce5a1ca06729       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      15 minutes ago      Exited              kube-proxy                0                   65474abfbeabf       kube-proxy-zpsll
	7092a3841df08       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      15 minutes ago      Exited              etcd                      0                   c74bc4df7851a       etcd-ha-193737
	d7d722793679c       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      15 minutes ago      Exited              kube-scheduler            0                   4873897c8ffd7       kube-scheduler-ha-193737
	
	
	==> coredns [a718e9dc3c409631fa8f5dc4d076b18ea96e3aa4e1019102c18b202167818924] <==
	Trace[1323641632]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:58164->10.96.0.1:443: read: connection reset by peer 10275ms (19:31:11.375)
	Trace[1323641632]: [10.2758202s] [10.2758202s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:58164->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [b9a32cfd9baece55551b1df51ab0ed27b91b91c878d698f5862fff2fbf70d0a3] <==
	[INFO] 10.244.2.2:37785 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000112105s
	[INFO] 10.244.0.4:34398 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000118394s
	[INFO] 10.244.0.4:35218 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001965777s
	[INFO] 10.244.1.2:56827 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00018086s
	[INFO] 10.244.1.2:50439 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003922693s
	[INFO] 10.244.2.2:33611 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123417s
	[INFO] 10.244.2.2:37877 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000204398s
	[INFO] 10.244.2.2:42894 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000164711s
	[INFO] 10.244.0.4:58512 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012749s
	[INFO] 10.244.0.4:60496 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126088s
	[INFO] 10.244.0.4:42876 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000054151s
	[INFO] 10.244.0.4:46048 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001023388s
	[INFO] 10.244.0.4:45307 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069619s
	[INFO] 10.244.0.4:54830 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086737s
	[INFO] 10.244.1.2:56566 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104818s
	[INFO] 10.244.2.2:44960 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017462s
	[INFO] 10.244.2.2:35520 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000147677s
	[INFO] 10.244.0.4:34887 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000089068s
	[INFO] 10.244.0.4:47038 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093137s
	[INFO] 10.244.1.2:44935 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000181924s
	[INFO] 10.244.2.2:51593 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000184246s
	[INFO] 10.244.2.2:37070 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000101666s
	[INFO] 10.244.0.4:49420 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115127s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c598f8345f1d86d70694138db26efde7af8aab971a5fa89ba6ab6e7468c8191a] <==
	[INFO] 10.244.2.2:47614 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170182s
	[INFO] 10.244.2.2:52937 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001095974s
	[INFO] 10.244.2.2:59751 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106474s
	[INFO] 10.244.0.4:55786 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001514187s
	[INFO] 10.244.0.4:56387 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000050769s
	[INFO] 10.244.1.2:54787 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013733s
	[INFO] 10.244.1.2:58281 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113165s
	[INFO] 10.244.1.2:48712 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097722s
	[INFO] 10.244.2.2:57237 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152523s
	[INFO] 10.244.2.2:47314 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106445s
	[INFO] 10.244.0.4:43887 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000199016s
	[INFO] 10.244.0.4:49901 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000240769s
	[INFO] 10.244.1.2:54100 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210259s
	[INFO] 10.244.1.2:60342 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000221646s
	[INFO] 10.244.1.2:33783 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000165277s
	[INFO] 10.244.2.2:45378 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000197846s
	[INFO] 10.244.2.2:33324 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000101556s
	[INFO] 10.244.0.4:40016 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000071122s
	[INFO] 10.244.0.4:40114 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135338s
	[INFO] 10.244.0.4:53904 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00006854s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e64eec86b7056617efbd7c498622e3f5958857bac4d73be6899dbf8db5c89cf9] <==
	Trace[2098793411]: [17.084408074s] [17.084408074s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41256->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41256->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
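
The [INFO]/[ERROR] pairs above come from CoreDNS's kubernetes plugin: a client-go reflector keeps retrying its initial LIST calls against the Service VIP 10.96.0.1:443 while the apiserver behind it is unreachable ("no route to host", "connection refused"), and the ready plugin keeps reporting "Still waiting on: kubernetes" until those caches sync. As a rough illustration only (not CoreDNS's actual code), here is a minimal client-go sketch issuing the same kind of list request, assuming it runs in-cluster:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // In-cluster config resolves to the Service VIP seen in the log (10.96.0.1:443).
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatalf("in-cluster config: %v", err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatalf("clientset: %v", err)
        }
        // Same request the reflector logs: GET /api/v1/services?limit=500.
        svcs, err := cs.CoreV1().Services(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{Limit: 500})
        if err != nil {
            // While the apiserver is down this is where "connection refused" or
            // "no route to host" surfaces; the reflector simply retries with backoff.
            log.Fatalf("list services: %v", err)
        }
        fmt.Printf("listed %d services\n", len(svcs.Items))
    }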
	
	
	==> describe nodes <==
	Name:               ha-193737
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-193737
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=ha-193737
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T19_20_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:20:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-193737
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:35:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 19:31:31 +0000   Tue, 01 Oct 2024 19:20:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 19:31:31 +0000   Tue, 01 Oct 2024 19:20:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 19:31:31 +0000   Tue, 01 Oct 2024 19:20:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 19:31:31 +0000   Tue, 01 Oct 2024 19:20:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.14
	  Hostname:    ha-193737
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 008c1ccd624b4ab3b90055ff9f65b018
	  System UUID:                008c1ccd-624b-4ab3-b900-55ff9f65b018
	  Boot ID:                    ad12c9f1-7a18-4d35-9ec9-00d91da3365b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rbjkx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-hd5hv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7c65d6cfc9-v2wsx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-193737                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-wnr6g                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-193737             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-193737    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-zpsll                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-193737             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-193737                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m26s                  kube-proxy       
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   Starting                 15m                    kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-193737 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-193737 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-193737 status is now: NodeHasSufficientMemory
	  Normal   Starting                 15m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     15m                    kubelet          Node ha-193737 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  15m                    kubelet          Node ha-193737 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m                    kubelet          Node ha-193737 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           15m                    node-controller  Node ha-193737 event: Registered Node ha-193737 in Controller
	  Normal   NodeReady                15m                    kubelet          Node ha-193737 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-193737 event: Registered Node ha-193737 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-193737 event: Registered Node ha-193737 in Controller
	  Warning  ContainerGCFailed        5m29s (x2 over 6m29s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             5m17s (x3 over 6m6s)   kubelet          Node ha-193737 status is now: NodeNotReady
	  Normal   RegisteredNode           4m31s                  node-controller  Node ha-193737 event: Registered Node ha-193737 in Controller
	  Normal   RegisteredNode           4m19s                  node-controller  Node ha-193737 event: Registered Node ha-193737 in Controller
	  Normal   RegisteredNode           3m19s                  node-controller  Node ha-193737 event: Registered Node ha-193737 in Controller
	
	
	Name:               ha-193737-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-193737-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=ha-193737
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T19_21_26_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:21:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-193737-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:35:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 19:32:14 +0000   Tue, 01 Oct 2024 19:31:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 19:32:14 +0000   Tue, 01 Oct 2024 19:31:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 19:32:14 +0000   Tue, 01 Oct 2024 19:31:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 19:32:14 +0000   Tue, 01 Oct 2024 19:31:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    ha-193737-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e20c76476d7c4acaa5fd75e5b8fa3bab
	  System UUID:                e20c7647-6d7c-4aca-a5fd-75e5b8fa3bab
	  Boot ID:                    8c6fc033-e543-4e30-847d-834cbaf17d73
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fz5bb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-193737-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-drdlr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-193737-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-193737-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-4294m                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-193737-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-193737-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m8s                   kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-193737-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-193737-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-193737-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                    node-controller  Node ha-193737-m02 event: Registered Node ha-193737-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-193737-m02 event: Registered Node ha-193737-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-193737-m02 event: Registered Node ha-193737-m02 in Controller
	  Normal  NodeNotReady             10m                    node-controller  Node ha-193737-m02 status is now: NodeNotReady
	  Normal  Starting                 4m55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m55s (x8 over 4m55s)  kubelet          Node ha-193737-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m55s (x8 over 4m55s)  kubelet          Node ha-193737-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m55s (x7 over 4m55s)  kubelet          Node ha-193737-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m31s                  node-controller  Node ha-193737-m02 event: Registered Node ha-193737-m02 in Controller
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-193737-m02 event: Registered Node ha-193737-m02 in Controller
	  Normal  RegisteredNode           3m19s                  node-controller  Node ha-193737-m02 event: Registered Node ha-193737-m02 in Controller
	
	
	Name:               ha-193737-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-193737-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=ha-193737
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T19_23_47_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:23:46 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-193737-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:33:33 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 01 Oct 2024 19:33:12 +0000   Tue, 01 Oct 2024 19:34:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 01 Oct 2024 19:33:12 +0000   Tue, 01 Oct 2024 19:34:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 01 Oct 2024 19:33:12 +0000   Tue, 01 Oct 2024 19:34:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 01 Oct 2024 19:33:12 +0000   Tue, 01 Oct 2024 19:34:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.152
	  Hostname:    ha-193737-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef1097b5e0604ff19d7361f2921010b9
	  System UUID:                ef1097b5-e060-4ff1-9d73-61f2921010b9
	  Boot ID:                    2719bddd-154a-445f-a3e2-5e319deb1327
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-zpkhd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-h886q              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-hz2nn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x3 over 12m)      kubelet          Node ha-193737-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x3 over 12m)      kubelet          Node ha-193737-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x3 over 12m)      kubelet          Node ha-193737-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-193737-m04 event: Registered Node ha-193737-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-193737-m04 event: Registered Node ha-193737-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-193737-m04 event: Registered Node ha-193737-m04 in Controller
	  Normal   NodeReady                11m                    kubelet          Node ha-193737-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m31s                  node-controller  Node ha-193737-m04 event: Registered Node ha-193737-m04 in Controller
	  Normal   RegisteredNode           4m19s                  node-controller  Node ha-193737-m04 event: Registered Node ha-193737-m04 in Controller
	  Normal   RegisteredNode           3m19s                  node-controller  Node ha-193737-m04 event: Registered Node ha-193737-m04 in Controller
	  Normal   NodeHasSufficientMemory  2m47s (x2 over 2m47s)  kubelet          Node ha-193737-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m47s                  kubelet          Node ha-193737-m04 has been rebooted, boot id: 2719bddd-154a-445f-a3e2-5e319deb1327
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m47s (x2 over 2m47s)  kubelet          Node ha-193737-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x2 over 2m47s)  kubelet          Node ha-193737-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                2m47s                  kubelet          Node ha-193737-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s (x2 over 3m51s)   node-controller  Node ha-193737-m04 status is now: NodeNotReady
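
The describe output above is a flattened rendering of each Node object; the condition rows in particular (including the Unknown entries for ha-193737-m04 once its kubelet stopped posting status) are just status.conditions. A small sketch of reading them directly, assuming the clientset and imports from the client-go example earlier; printNodeConditions is an illustrative helper, not part of the test suite:

    // printNodeConditions prints the condition table that `kubectl describe node`
    // renders, e.g. Ready=Unknown / NodeStatusUnknown for ha-193737-m04 above.
    func printNodeConditions(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        for _, c := range node.Status.Conditions {
            fmt.Printf("%-16s %-8s %s: %s\n", c.Type, c.Status, c.Reason, c.Message)
        }
        return nil
    }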
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.804167] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.059657] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065329] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.157689] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.148971] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.256595] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +3.897654] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +5.026995] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.059544] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.061605] systemd-fstab-generator[1305]: Ignoring "noauto" option for root device
	[  +0.119912] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.150839] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.375138] kauditd_printk_skb: 38 callbacks suppressed
	[Oct 1 19:21] kauditd_printk_skb: 24 callbacks suppressed
	[Oct 1 19:30] systemd-fstab-generator[3686]: Ignoring "noauto" option for root device
	[  +0.146997] systemd-fstab-generator[3698]: Ignoring "noauto" option for root device
	[  +0.178415] systemd-fstab-generator[3712]: Ignoring "noauto" option for root device
	[  +0.155876] systemd-fstab-generator[3724]: Ignoring "noauto" option for root device
	[  +0.286412] systemd-fstab-generator[3752]: Ignoring "noauto" option for root device
	[  +0.739524] systemd-fstab-generator[3849]: Ignoring "noauto" option for root device
	[  +5.551763] kauditd_printk_skb: 122 callbacks suppressed
	[Oct 1 19:31] kauditd_printk_skb: 85 callbacks suppressed
	[  +5.445779] kauditd_printk_skb: 2 callbacks suppressed
	[ +25.734051] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [7092a3841df088f68cbc08587792dbde5f29e48ed792054959328dc83401229e] <==
	{"level":"info","ts":"2024-10-01T19:29:09.867822Z","caller":"traceutil/trace.go:171","msg":"trace[1059982064] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; }","duration":"735.738205ms","start":"2024-10-01T19:29:09.132078Z","end":"2024-10-01T19:29:09.867816Z","steps":["trace[1059982064] 'agreement among raft nodes before linearized reading'  (duration: 735.687491ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T19:29:09.867851Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T19:29:09.132068Z","time spent":"735.77607ms","remote":"127.0.0.1:42554","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":0,"request content":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" limit:10000 "}
	2024/10/01 19:29:09 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-10-01T19:29:09.998558Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.14:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-01T19:29:09.998685Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.14:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-01T19:29:10.000488Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"599035dfeb7e0476","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-10-01T19:29:10.000787Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"be719cfe4c1d88a"}
	{"level":"info","ts":"2024-10-01T19:29:10.000825Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"be719cfe4c1d88a"}
	{"level":"info","ts":"2024-10-01T19:29:10.000859Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"be719cfe4c1d88a"}
	{"level":"info","ts":"2024-10-01T19:29:10.000972Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a"}
	{"level":"info","ts":"2024-10-01T19:29:10.001017Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a"}
	{"level":"info","ts":"2024-10-01T19:29:10.001063Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"599035dfeb7e0476","remote-peer-id":"be719cfe4c1d88a"}
	{"level":"info","ts":"2024-10-01T19:29:10.001074Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"be719cfe4c1d88a"}
	{"level":"info","ts":"2024-10-01T19:29:10.001080Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e0aed16a49605245"}
	{"level":"info","ts":"2024-10-01T19:29:10.001092Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e0aed16a49605245"}
	{"level":"info","ts":"2024-10-01T19:29:10.001124Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e0aed16a49605245"}
	{"level":"info","ts":"2024-10-01T19:29:10.001213Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"599035dfeb7e0476","remote-peer-id":"e0aed16a49605245"}
	{"level":"info","ts":"2024-10-01T19:29:10.001253Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"599035dfeb7e0476","remote-peer-id":"e0aed16a49605245"}
	{"level":"info","ts":"2024-10-01T19:29:10.001305Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"599035dfeb7e0476","remote-peer-id":"e0aed16a49605245"}
	{"level":"info","ts":"2024-10-01T19:29:10.001316Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e0aed16a49605245"}
	{"level":"info","ts":"2024-10-01T19:29:10.004454Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.14:2380"}
	{"level":"warn","ts":"2024-10-01T19:29:10.004487Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.868666744s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-10-01T19:29:10.004638Z","caller":"traceutil/trace.go:171","msg":"trace[25566447] range","detail":"{range_begin:; range_end:; }","duration":"8.868838929s","start":"2024-10-01T19:29:01.135783Z","end":"2024-10-01T19:29:10.004622Z","steps":["trace[25566447] 'agreement among raft nodes before linearized reading'  (duration: 8.868664936s)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T19:29:10.004685Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.14:2380"}
	{"level":"info","ts":"2024-10-01T19:29:10.004772Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-193737","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.14:2380"],"advertise-client-urls":["https://192.168.39.14:2379"]}
	
	
	==> etcd [d10a89edf2195041bf7b272302c3c39e01a5af56118e22a81c8e75031db83b8b] <==
	{"level":"info","ts":"2024-10-01T19:32:26.696520Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"599035dfeb7e0476","remote-peer-id":"e0aed16a49605245"}
	{"level":"info","ts":"2024-10-01T19:32:26.708449Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"599035dfeb7e0476","to":"e0aed16a49605245","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-10-01T19:32:26.708510Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"599035dfeb7e0476","remote-peer-id":"e0aed16a49605245"}
	{"level":"info","ts":"2024-10-01T19:32:26.709345Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"599035dfeb7e0476","to":"e0aed16a49605245","stream-type":"stream Message"}
	{"level":"info","ts":"2024-10-01T19:32:26.709421Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"599035dfeb7e0476","remote-peer-id":"e0aed16a49605245"}
	{"level":"warn","ts":"2024-10-01T19:33:25.625636Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.39.101:59456","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-10-01T19:33:25.639547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"599035dfeb7e0476 switched to configuration voters=(857682634724202634 6453717501866804342)"}
	{"level":"info","ts":"2024-10-01T19:33:25.642181Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"7dcc0a60dbbc15a1","local-member-id":"599035dfeb7e0476","removed-remote-peer-id":"e0aed16a49605245","removed-remote-peer-urls":["https://192.168.39.101:2380"]}
	{"level":"info","ts":"2024-10-01T19:33:25.642343Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e0aed16a49605245"}
	{"level":"warn","ts":"2024-10-01T19:33:25.642444Z","caller":"etcdserver/server.go:987","msg":"rejected Raft message from removed member","local-member-id":"599035dfeb7e0476","removed-member-id":"e0aed16a49605245"}
	{"level":"warn","ts":"2024-10-01T19:33:25.642535Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2024-10-01T19:33:25.642874Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e0aed16a49605245"}
	{"level":"info","ts":"2024-10-01T19:33:25.642949Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e0aed16a49605245"}
	{"level":"warn","ts":"2024-10-01T19:33:25.643220Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e0aed16a49605245"}
	{"level":"info","ts":"2024-10-01T19:33:25.643303Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e0aed16a49605245"}
	{"level":"info","ts":"2024-10-01T19:33:25.643559Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"599035dfeb7e0476","remote-peer-id":"e0aed16a49605245"}
	{"level":"warn","ts":"2024-10-01T19:33:25.643831Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"599035dfeb7e0476","remote-peer-id":"e0aed16a49605245","error":"context canceled"}
	{"level":"warn","ts":"2024-10-01T19:33:25.643920Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"e0aed16a49605245","error":"failed to read e0aed16a49605245 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-10-01T19:33:25.643988Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"599035dfeb7e0476","remote-peer-id":"e0aed16a49605245"}
	{"level":"warn","ts":"2024-10-01T19:33:25.644265Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"599035dfeb7e0476","remote-peer-id":"e0aed16a49605245","error":"context canceled"}
	{"level":"info","ts":"2024-10-01T19:33:25.644342Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"599035dfeb7e0476","remote-peer-id":"e0aed16a49605245"}
	{"level":"info","ts":"2024-10-01T19:33:25.644384Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e0aed16a49605245"}
	{"level":"info","ts":"2024-10-01T19:33:25.644428Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"599035dfeb7e0476","removed-remote-peer-id":"e0aed16a49605245"}
	{"level":"warn","ts":"2024-10-01T19:33:25.651951Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"599035dfeb7e0476","remote-peer-id-stream-handler":"599035dfeb7e0476","remote-peer-id-from":"e0aed16a49605245"}
	{"level":"warn","ts":"2024-10-01T19:33:25.657111Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"599035dfeb7e0476","remote-peer-id-stream-handler":"599035dfeb7e0476","remote-peer-id-from":"e0aed16a49605245"}
	
	
	==> kernel <==
	 19:35:59 up 16 min,  0 users,  load average: 0.05, 0.29, 0.28
	Linux ha-193737 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [25b91984e532b8571863aa92120cdc813f23e6394cbbdbe3f30aacdaa98bd525] <==
	I1001 19:28:38.345531       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	I1001 19:28:48.345611       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I1001 19:28:48.345677       1 main.go:322] Node ha-193737-m03 has CIDR [10.244.2.0/24] 
	I1001 19:28:48.345941       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1001 19:28:48.345961       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	I1001 19:28:48.346018       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I1001 19:28:48.346034       1 main.go:299] handling current node
	I1001 19:28:48.346045       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I1001 19:28:48.346050       1 main.go:322] Node ha-193737-m02 has CIDR [10.244.1.0/24] 
	I1001 19:28:58.354002       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I1001 19:28:58.354056       1 main.go:299] handling current node
	I1001 19:28:58.354085       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I1001 19:28:58.354092       1 main.go:322] Node ha-193737-m02 has CIDR [10.244.1.0/24] 
	I1001 19:28:58.354304       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I1001 19:28:58.354320       1 main.go:322] Node ha-193737-m03 has CIDR [10.244.2.0/24] 
	I1001 19:28:58.354370       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1001 19:28:58.354375       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	I1001 19:29:08.354044       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I1001 19:29:08.354147       1 main.go:299] handling current node
	I1001 19:29:08.354184       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I1001 19:29:08.354190       1 main.go:322] Node ha-193737-m02 has CIDR [10.244.1.0/24] 
	I1001 19:29:08.354329       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I1001 19:29:08.354348       1 main.go:322] Node ha-193737-m03 has CIDR [10.244.2.0/24] 
	I1001 19:29:08.354421       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1001 19:29:08.354437       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [61ff92cf26d6e3aed32887454ab7d2058de0b8a5e7ea7861f48b1d01a5939727] <==
	I1001 19:35:10.064927       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	I1001 19:35:20.072842       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1001 19:35:20.073073       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	I1001 19:35:20.073323       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I1001 19:35:20.073348       1 main.go:299] handling current node
	I1001 19:35:20.073386       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I1001 19:35:20.073403       1 main.go:322] Node ha-193737-m02 has CIDR [10.244.1.0/24] 
	I1001 19:35:30.071647       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I1001 19:35:30.071705       1 main.go:299] handling current node
	I1001 19:35:30.071770       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I1001 19:35:30.071780       1 main.go:322] Node ha-193737-m02 has CIDR [10.244.1.0/24] 
	I1001 19:35:30.071985       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1001 19:35:30.072016       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	I1001 19:35:40.064139       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I1001 19:35:40.064248       1 main.go:322] Node ha-193737-m02 has CIDR [10.244.1.0/24] 
	I1001 19:35:40.064463       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1001 19:35:40.064495       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	I1001 19:35:40.064566       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I1001 19:35:40.064586       1 main.go:299] handling current node
	I1001 19:35:50.064298       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I1001 19:35:50.064375       1 main.go:299] handling current node
	I1001 19:35:50.064390       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I1001 19:35:50.064395       1 main.go:322] Node ha-193737-m02 has CIDR [10.244.1.0/24] 
	I1001 19:35:50.064609       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1001 19:35:50.064628       1 main.go:322] Node ha-193737-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [3382226e00b6ef4a63086f6faaee763c7a138978d0ef813c494eb8ffc1d02c5f] <==
	I1001 19:30:49.576316       1 options.go:228] external host was not specified, using 192.168.39.14
	I1001 19:30:49.584942       1 server.go:142] Version: v1.31.1
	I1001 19:30:49.585046       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 19:30:50.301863       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I1001 19:30:50.357816       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1001 19:30:50.362894       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1001 19:30:50.362927       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1001 19:30:50.363226       1 instance.go:232] Using reconciler: lease
	W1001 19:31:10.301595       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1001 19:31:10.301596       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1001 19:31:10.370298       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [cc73e66125bdf484da9d957113d2dbcd22b1cca191ae53bdd53cddf4df26a9b4] <==
	I1001 19:31:37.189083       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1001 19:31:37.189121       1 policy_source.go:224] refreshing policies
	I1001 19:31:37.206958       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1001 19:31:37.229166       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1001 19:31:37.234774       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1001 19:31:37.234855       1 aggregator.go:171] initial CRD sync complete...
	I1001 19:31:37.234871       1 autoregister_controller.go:144] Starting autoregister controller
	I1001 19:31:37.234877       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1001 19:31:37.234883       1 cache.go:39] Caches are synced for autoregister controller
	I1001 19:31:37.235574       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1001 19:31:37.236173       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1001 19:31:37.236202       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1001 19:31:37.236931       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1001 19:31:37.237042       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1001 19:31:37.237265       1 shared_informer.go:320] Caches are synced for configmaps
	I1001 19:31:37.238948       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W1001 19:31:37.244200       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.27]
	I1001 19:31:37.245455       1 controller.go:615] quota admission added evaluator for: endpoints
	I1001 19:31:37.253871       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1001 19:31:37.257867       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1001 19:31:37.278989       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1001 19:31:38.134497       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1001 19:31:38.574995       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.14 192.168.39.27]
	W1001 19:32:38.576958       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.101 192.168.39.14 192.168.39.27]
	E1001 19:32:38.582416       1 controller.go:163] "Unhandled Error" err="unable to sync kubernetes service: Operation cannot be fulfilled on endpoints \"kubernetes\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	
	
	==> kube-controller-manager [0355d034cef455e56a215145c9058bc9694f5da8c3c4c2172ae416d5af558add] <==
	I1001 19:34:13.960159       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:34:13.975154       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:34:14.054050       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="20.383267ms"
	I1001 19:34:14.054271       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.869µs"
	I1001 19:34:15.945771       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	I1001 19:34:19.095479       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-193737-m04"
	E1001 19:34:20.854443       1 gc_controller.go:151] "Failed to get node" err="node \"ha-193737-m03\" not found" logger="pod-garbage-collector-controller" node="ha-193737-m03"
	E1001 19:34:20.854481       1 gc_controller.go:151] "Failed to get node" err="node \"ha-193737-m03\" not found" logger="pod-garbage-collector-controller" node="ha-193737-m03"
	E1001 19:34:20.854488       1 gc_controller.go:151] "Failed to get node" err="node \"ha-193737-m03\" not found" logger="pod-garbage-collector-controller" node="ha-193737-m03"
	E1001 19:34:20.854494       1 gc_controller.go:151] "Failed to get node" err="node \"ha-193737-m03\" not found" logger="pod-garbage-collector-controller" node="ha-193737-m03"
	E1001 19:34:20.854498       1 gc_controller.go:151] "Failed to get node" err="node \"ha-193737-m03\" not found" logger="pod-garbage-collector-controller" node="ha-193737-m03"
	I1001 19:34:20.865739       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-193737-m03"
	I1001 19:34:20.902513       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-193737-m03"
	I1001 19:34:20.902673       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-193737-m03"
	I1001 19:34:20.928290       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-193737-m03"
	I1001 19:34:20.928330       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-193737-m03"
	I1001 19:34:20.953614       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-193737-m03"
	I1001 19:34:20.953884       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-bqht8"
	I1001 19:34:20.982523       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-bqht8"
	I1001 19:34:20.982934       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-193737-m03"
	I1001 19:34:21.023551       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-193737-m03"
	I1001 19:34:21.023896       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-9pm4t"
	I1001 19:34:21.055081       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-9pm4t"
	I1001 19:34:21.055117       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-193737-m03"
	I1001 19:34:21.082906       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-193737-m03"
	
	
	==> kube-controller-manager [95bc5dbd279eda6d388ec30e614a300b3e7377edf477e235fd68a408b5928575] <==
	I1001 19:30:50.517964       1 serving.go:386] Generated self-signed cert in-memory
	I1001 19:30:50.838702       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I1001 19:30:50.838844       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 19:30:50.840571       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1001 19:30:50.840800       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1001 19:30:50.841759       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1001 19:30:50.842333       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1001 19:31:11.377063       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.14:8443/healthz\": dial tcp 192.168.39.14:8443: connect: connection refused"
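
This controller-manager instance gives up after its apiserver health check keeps failing; per the error, that check is an HTTPS GET of /healthz on 192.168.39.14:8443. A quick hand-rolled probe of the same endpoint could look like the sketch below (skipping TLS verification and relying on anonymous access to /healthz are assumptions for a manual check, not what the controller-manager itself does):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        // Skip certificate verification only for this ad-hoc probe.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.39.14:8443/healthz")
        if err != nil {
            // This is the "connection refused" the controller-manager reported
            // while the apiserver was still coming up.
            log.Fatalf("healthz probe failed: %v", err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s: %s\n", resp.Status, body)
    }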
	
	
	==> kube-proxy [6ce5a1ca06729791a2b24b6dccf36f1cf98d23a263a6907081cd8f4885f9610c] <==
	E1001 19:28:04.776620       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-193737&resourceVersion=1760\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1001 19:28:07.846194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	E1001 19:28:07.846776       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1001 19:28:07.846992       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E1001 19:28:07.847057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1760\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1001 19:28:07.847329       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-193737&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E1001 19:28:07.847806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-193737&resourceVersion=1760\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1001 19:28:13.991545       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E1001 19:28:13.991601       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1760\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1001 19:28:13.991689       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	E1001 19:28:13.991753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1001 19:28:13.991826       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-193737&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E1001 19:28:13.991861       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-193737&resourceVersion=1760\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1001 19:28:23.207233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	E1001 19:28:23.207411       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1001 19:28:23.207557       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-193737&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E1001 19:28:23.207600       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-193737&resourceVersion=1760\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1001 19:28:26.279074       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E1001 19:28:26.279198       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1760\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1001 19:28:38.566889       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-193737&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E1001 19:28:38.567082       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-193737&resourceVersion=1760\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1001 19:28:44.712231       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831": dial tcp 192.168.39.254:8443: connect: no route to host
	E1001 19:28:44.712915       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1831\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1001 19:28:47.786071       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E1001 19:28:47.791881       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1760\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [82c0e82b0f6c09d87ec13c643737a4ccf5b340e9502946d56bbb217eb96dbe93] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 19:30:50.666872       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-193737\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E1001 19:30:53.734927       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-193737\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E1001 19:30:56.807382       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-193737\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E1001 19:31:02.950263       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-193737\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E1001 19:31:15.239758       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-193737\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I1001 19:31:32.767297       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.14"]
	E1001 19:31:32.767596       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 19:31:32.817907       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 19:31:32.818028       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 19:31:32.818073       1 server_linux.go:169] "Using iptables Proxier"
	I1001 19:31:32.822309       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 19:31:32.822898       1 server.go:483] "Version info" version="v1.31.1"
	I1001 19:31:32.823310       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 19:31:32.827092       1 config.go:199] "Starting service config controller"
	I1001 19:31:32.827249       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 19:31:32.827354       1 config.go:105] "Starting endpoint slice config controller"
	I1001 19:31:32.827434       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 19:31:32.828915       1 config.go:328] "Starting node config controller"
	I1001 19:31:32.829066       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 19:31:32.928570       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 19:31:32.928625       1 shared_informer.go:320] Caches are synced for service config
	I1001 19:31:32.929159       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [83950c035f12e98c6757b6223f78b7f5a39d863ec5c60eac7c00e820c1c5c076] <==
	W1001 19:31:29.471299       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.14:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.14:8443: connect: connection refused
	E1001 19:31:29.471418       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.14:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.14:8443: connect: connection refused" logger="UnhandledError"
	W1001 19:31:29.559295       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.14:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.14:8443: connect: connection refused
	E1001 19:31:29.559427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.14:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.14:8443: connect: connection refused" logger="UnhandledError"
	W1001 19:31:30.176066       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.14:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.14:8443: connect: connection refused
	E1001 19:31:30.176146       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.14:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.14:8443: connect: connection refused" logger="UnhandledError"
	W1001 19:31:30.955508       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.14:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.14:8443: connect: connection refused
	E1001 19:31:30.955620       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.14:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.14:8443: connect: connection refused" logger="UnhandledError"
	W1001 19:31:31.015208       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.14:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.14:8443: connect: connection refused
	E1001 19:31:31.015296       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.14:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.14:8443: connect: connection refused" logger="UnhandledError"
	W1001 19:31:31.136450       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.14:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.14:8443: connect: connection refused
	E1001 19:31:31.136518       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.14:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.14:8443: connect: connection refused" logger="UnhandledError"
	W1001 19:31:31.619353       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.14:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.14:8443: connect: connection refused
	E1001 19:31:31.619410       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.14:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.14:8443: connect: connection refused" logger="UnhandledError"
	W1001 19:31:31.846120       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.14:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.14:8443: connect: connection refused
	E1001 19:31:31.846257       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.14:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.14:8443: connect: connection refused" logger="UnhandledError"
	W1001 19:31:32.824494       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.14:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.14:8443: connect: connection refused
	E1001 19:31:32.824636       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.14:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.14:8443: connect: connection refused" logger="UnhandledError"
	W1001 19:31:33.558477       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.14:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.14:8443: connect: connection refused
	E1001 19:31:33.558622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.14:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.14:8443: connect: connection refused" logger="UnhandledError"
	I1001 19:31:55.786797       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1001 19:33:22.414464       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-zpkhd\": pod busybox-7dff88458-zpkhd is already assigned to node \"ha-193737-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-zpkhd" node="ha-193737-m04"
	E1001 19:33:22.414616       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 17f70fea-f3ff-46cb-80ec-ee17d26a9c14(default/busybox-7dff88458-zpkhd) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-zpkhd"
	E1001 19:33:22.414658       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-zpkhd\": pod busybox-7dff88458-zpkhd is already assigned to node \"ha-193737-m04\"" pod="default/busybox-7dff88458-zpkhd"
	I1001 19:33:22.414688       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-zpkhd" node="ha-193737-m04"
	
	
	==> kube-scheduler [d7d722793679c0adf054005c0f4166c7306fbfeeb10594a6395130db2f366dd7] <==
	E1001 19:23:47.081864       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 785d6c85-2697-4f02-80a4-55483a0faa64(kube-system/kube-proxy-z5qhk) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-z5qhk"
	E1001 19:23:47.081920       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-z5qhk\": pod kube-proxy-z5qhk is already assigned to node \"ha-193737-m04\"" pod="kube-system/kube-proxy-z5qhk"
	I1001 19:23:47.083299       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-z5qhk" node="ha-193737-m04"
	E1001 19:23:47.138476       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4q2pc\": pod kindnet-4q2pc is already assigned to node \"ha-193737-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4q2pc" node="ha-193737-m04"
	E1001 19:23:47.138649       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f23b02a5-c64e-44c3-83b9-7192d19a6efc(kube-system/kindnet-4q2pc) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-4q2pc"
	E1001 19:23:47.138779       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4q2pc\": pod kindnet-4q2pc is already assigned to node \"ha-193737-m04\"" pod="kube-system/kindnet-4q2pc"
	I1001 19:23:47.138823       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-4q2pc" node="ha-193737-m04"
	E1001 19:28:48.062122       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E1001 19:28:56.022025       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E1001 19:28:56.308818       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E1001 19:28:59.265228       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E1001 19:28:59.619153       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E1001 19:28:59.734866       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E1001 19:29:00.233835       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E1001 19:29:00.942769       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E1001 19:29:02.996497       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E1001 19:29:04.540046       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E1001 19:29:05.100416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E1001 19:29:05.312911       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E1001 19:29:05.856067       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E1001 19:29:07.264696       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E1001 19:29:09.117591       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	W1001 19:29:09.269390       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1001 19:29:09.269441       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1001 19:29:09.826057       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 01 19:34:31 ha-193737 kubelet[1313]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 19:34:31 ha-193737 kubelet[1313]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 19:34:31 ha-193737 kubelet[1313]: E1001 19:34:31.236440    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811271235875289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:34:31 ha-193737 kubelet[1313]: E1001 19:34:31.236473    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811271235875289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:34:41 ha-193737 kubelet[1313]: E1001 19:34:41.238765    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811281238368128,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:34:41 ha-193737 kubelet[1313]: E1001 19:34:41.239084    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811281238368128,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:34:51 ha-193737 kubelet[1313]: E1001 19:34:51.241528    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811291241104634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:34:51 ha-193737 kubelet[1313]: E1001 19:34:51.241958    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811291241104634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:35:01 ha-193737 kubelet[1313]: E1001 19:35:01.244688    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811301244263934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:35:01 ha-193737 kubelet[1313]: E1001 19:35:01.244791    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811301244263934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:35:11 ha-193737 kubelet[1313]: E1001 19:35:11.247938    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811311247287504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:35:11 ha-193737 kubelet[1313]: E1001 19:35:11.248388    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811311247287504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:35:21 ha-193737 kubelet[1313]: E1001 19:35:21.250771    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811321250366169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:35:21 ha-193737 kubelet[1313]: E1001 19:35:21.251194    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811321250366169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:35:31 ha-193737 kubelet[1313]: E1001 19:35:31.045860    1313 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 01 19:35:31 ha-193737 kubelet[1313]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 01 19:35:31 ha-193737 kubelet[1313]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 01 19:35:31 ha-193737 kubelet[1313]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 19:35:31 ha-193737 kubelet[1313]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 19:35:31 ha-193737 kubelet[1313]: E1001 19:35:31.253641    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811331253259958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:35:31 ha-193737 kubelet[1313]: E1001 19:35:31.253666    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811331253259958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:35:41 ha-193737 kubelet[1313]: E1001 19:35:41.256163    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811341255789037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:35:41 ha-193737 kubelet[1313]: E1001 19:35:41.256599    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811341255789037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:35:51 ha-193737 kubelet[1313]: E1001 19:35:51.258506    1313 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811351258067118,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:35:51 ha-193737 kubelet[1313]: E1001 19:35:51.258941    1313 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727811351258067118,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1001 19:35:58.776632   39817 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19736-11198/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
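Note on the "bufio.Scanner: token too long" error in the stderr block above: Go's bufio.Scanner stops scanning when a single line exceeds its default token limit of 64 KiB, which is what happens here when lastStart.txt contains a very long line. The sketch below is illustrative only (it is not minikube's logs.go, and the file path is a placeholder); it shows the standard way to raise that limit with Scanner.Buffer so the read succeeds.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // placeholder path; the real file lives under .minikube/logs/
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Default max token size is 64 KiB; a longer log line aborts Scan()
	// with "bufio.Scanner: token too long". Raise the cap to 10 MiB.
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan failed:", err)
	}
}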
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-193737 -n ha-193737
helpers_test.go:261: (dbg) Run:  kubectl --context ha-193737 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.71s)
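Most of the noise in the dump above is the same two dial failures repeated: the HA virtual IP behind control-plane.minikube.internal (192.168.39.254:8443) answering "no route to host" while the cluster is being stopped, and the node's own apiserver (192.168.39.14:8443) briefly refusing connections during restart. When triaging by hand, a plain TCP probe is enough to tell the two symptoms apart; the sketch below is illustrative only and not part of the test suite.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoints taken from the log output above: the HA VIP and the node's own apiserver.
	endpoints := []string{"192.168.39.254:8443", "192.168.39.14:8443"}
	for _, addr := range endpoints {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			// "no route to host" points at the VIP/route being gone;
			// "connection refused" means the host is up but nothing is listening on 8443.
			fmt.Printf("%-22s unreachable: %v\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%-22s reachable\n", addr)
	}
}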

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (328.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-325713
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-325713
E1001 19:50:02.093745   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-325713: exit status 82 (2m1.800147738s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-325713-m03"  ...
	* Stopping node "multinode-325713-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
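For reference, exit status 82 is what the harness observes from the "minikube stop" invocation above once GUEST_STOP_TIMEOUT is hit. The sketch below (an assumed standalone wrapper, not the actual multinode_test.go helper) shows how such a non-zero exit code surfaces to a Go caller via os/exec.

package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "multinode-325713")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// A timed-out stop reports a non-zero code (82 in the run above).
		fmt.Fprintf(os.Stderr, "minikube stop exited with status %d\n", ee.ExitCode())
		return
	}
	if err != nil {
		fmt.Fprintln(os.Stderr, "failed to run minikube stop:", err)
	}
}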
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-325713" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-325713 --wait=true -v=8 --alsologtostderr
E1001 19:51:34.845288   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
E1001 19:51:59.027639   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-325713 --wait=true -v=8 --alsologtostderr: (3m24.675176884s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-325713
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-325713 -n multinode-325713
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-325713 logs -n 25: (1.398988971s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-325713 ssh -n                                                                 | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | multinode-325713-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-325713 cp multinode-325713-m02:/home/docker/cp-test.txt                       | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile187864513/001/cp-test_multinode-325713-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-325713 ssh -n                                                                 | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | multinode-325713-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-325713 cp multinode-325713-m02:/home/docker/cp-test.txt                       | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | multinode-325713:/home/docker/cp-test_multinode-325713-m02_multinode-325713.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-325713 ssh -n                                                                 | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | multinode-325713-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-325713 ssh -n multinode-325713 sudo cat                                       | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | /home/docker/cp-test_multinode-325713-m02_multinode-325713.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-325713 cp multinode-325713-m02:/home/docker/cp-test.txt                       | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | multinode-325713-m03:/home/docker/cp-test_multinode-325713-m02_multinode-325713-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-325713 ssh -n                                                                 | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | multinode-325713-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-325713 ssh -n multinode-325713-m03 sudo cat                                   | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | /home/docker/cp-test_multinode-325713-m02_multinode-325713-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-325713 cp testdata/cp-test.txt                                                | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | multinode-325713-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-325713 ssh -n                                                                 | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | multinode-325713-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-325713 cp multinode-325713-m03:/home/docker/cp-test.txt                       | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile187864513/001/cp-test_multinode-325713-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-325713 ssh -n                                                                 | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | multinode-325713-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-325713 cp multinode-325713-m03:/home/docker/cp-test.txt                       | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | multinode-325713:/home/docker/cp-test_multinode-325713-m03_multinode-325713.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-325713 ssh -n                                                                 | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | multinode-325713-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-325713 ssh -n multinode-325713 sudo cat                                       | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | /home/docker/cp-test_multinode-325713-m03_multinode-325713.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-325713 cp multinode-325713-m03:/home/docker/cp-test.txt                       | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | multinode-325713-m02:/home/docker/cp-test_multinode-325713-m03_multinode-325713-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-325713 ssh -n                                                                 | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | multinode-325713-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-325713 ssh -n multinode-325713-m02 sudo cat                                   | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | /home/docker/cp-test_multinode-325713-m03_multinode-325713-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-325713 node stop m03                                                          | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	| node    | multinode-325713 node start                                                             | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:49 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-325713                                                                | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:49 UTC |                     |
	| stop    | -p multinode-325713                                                                     | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:49 UTC |                     |
	| start   | -p multinode-325713                                                                     | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:51 UTC | 01 Oct 24 19:54 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-325713                                                                | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:54 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 19:51:10
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 19:51:10.736246   48985 out.go:345] Setting OutFile to fd 1 ...
	I1001 19:51:10.736403   48985 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:51:10.736413   48985 out.go:358] Setting ErrFile to fd 2...
	I1001 19:51:10.736417   48985 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:51:10.736620   48985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 19:51:10.737172   48985 out.go:352] Setting JSON to false
	I1001 19:51:10.738064   48985 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":5613,"bootTime":1727806658,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 19:51:10.738163   48985 start.go:139] virtualization: kvm guest
	I1001 19:51:10.740050   48985 out.go:177] * [multinode-325713] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 19:51:10.741271   48985 notify.go:220] Checking for updates...
	I1001 19:51:10.741281   48985 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 19:51:10.742452   48985 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 19:51:10.743588   48985 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 19:51:10.744620   48985 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:51:10.745680   48985 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 19:51:10.747028   48985 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 19:51:10.748532   48985 config.go:182] Loaded profile config "multinode-325713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:51:10.748638   48985 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 19:51:10.749098   48985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:51:10.749152   48985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:51:10.763932   48985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40435
	I1001 19:51:10.764461   48985 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:51:10.765060   48985 main.go:141] libmachine: Using API Version  1
	I1001 19:51:10.765083   48985 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:51:10.765429   48985 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:51:10.765585   48985 main.go:141] libmachine: (multinode-325713) Calling .DriverName
	I1001 19:51:10.803278   48985 out.go:177] * Using the kvm2 driver based on existing profile
	I1001 19:51:10.804489   48985 start.go:297] selected driver: kvm2
	I1001 19:51:10.804516   48985 start.go:901] validating driver "kvm2" against &{Name:multinode-325713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:multinode-325713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.61 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false insp
ektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:51:10.804731   48985 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 19:51:10.805310   48985 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 19:51:10.805427   48985 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-11198/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 19:51:10.821034   48985 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 19:51:10.821863   48985 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 19:51:10.821905   48985 cni.go:84] Creating CNI manager for ""
	I1001 19:51:10.821955   48985 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1001 19:51:10.822021   48985 start.go:340] cluster config:
	{Name:multinode-325713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-325713 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.61 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:51:10.822149   48985 iso.go:125] acquiring lock: {Name:mk06aa0d5182c9fbfa5e40313025cbec2b4400a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 19:51:10.823884   48985 out.go:177] * Starting "multinode-325713" primary control-plane node in "multinode-325713" cluster
	I1001 19:51:10.824949   48985 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 19:51:10.825005   48985 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 19:51:10.825023   48985 cache.go:56] Caching tarball of preloaded images
	I1001 19:51:10.825171   48985 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 19:51:10.825197   48985 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 19:51:10.825387   48985 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/multinode-325713/config.json ...
	I1001 19:51:10.825646   48985 start.go:360] acquireMachinesLock for multinode-325713: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 19:51:10.825700   48985 start.go:364] duration metric: took 28.217µs to acquireMachinesLock for "multinode-325713"
	I1001 19:51:10.825714   48985 start.go:96] Skipping create...Using existing machine configuration
	I1001 19:51:10.825721   48985 fix.go:54] fixHost starting: 
	I1001 19:51:10.826028   48985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:51:10.826063   48985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:51:10.840709   48985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33405
	I1001 19:51:10.841082   48985 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:51:10.841571   48985 main.go:141] libmachine: Using API Version  1
	I1001 19:51:10.841594   48985 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:51:10.841924   48985 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:51:10.842150   48985 main.go:141] libmachine: (multinode-325713) Calling .DriverName
	I1001 19:51:10.842358   48985 main.go:141] libmachine: (multinode-325713) Calling .GetState
	I1001 19:51:10.844197   48985 fix.go:112] recreateIfNeeded on multinode-325713: state=Running err=<nil>
	W1001 19:51:10.844236   48985 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 19:51:10.846035   48985 out.go:177] * Updating the running kvm2 "multinode-325713" VM ...
	I1001 19:51:10.847172   48985 machine.go:93] provisionDockerMachine start ...
	I1001 19:51:10.847197   48985 main.go:141] libmachine: (multinode-325713) Calling .DriverName
	I1001 19:51:10.847424   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHHostname
	I1001 19:51:10.850357   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:51:10.850913   48985 main.go:141] libmachine: (multinode-325713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:17:a5", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:45:45 +0000 UTC Type:0 Mac:52:54:00:df:17:a5 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-325713 Clientid:01:52:54:00:df:17:a5}
	I1001 19:51:10.850952   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined IP address 192.168.39.165 and MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:51:10.851128   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHPort
	I1001 19:51:10.851322   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHKeyPath
	I1001 19:51:10.851493   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHKeyPath
	I1001 19:51:10.851661   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHUsername
	I1001 19:51:10.851842   48985 main.go:141] libmachine: Using SSH client type: native
	I1001 19:51:10.852029   48985 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I1001 19:51:10.852041   48985 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 19:51:10.969781   48985 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-325713
	
	I1001 19:51:10.969808   48985 main.go:141] libmachine: (multinode-325713) Calling .GetMachineName
	I1001 19:51:10.970083   48985 buildroot.go:166] provisioning hostname "multinode-325713"
	I1001 19:51:10.970113   48985 main.go:141] libmachine: (multinode-325713) Calling .GetMachineName
	I1001 19:51:10.970325   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHHostname
	I1001 19:51:10.973103   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:51:10.973557   48985 main.go:141] libmachine: (multinode-325713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:17:a5", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:45:45 +0000 UTC Type:0 Mac:52:54:00:df:17:a5 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-325713 Clientid:01:52:54:00:df:17:a5}
	I1001 19:51:10.973584   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined IP address 192.168.39.165 and MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:51:10.973726   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHPort
	I1001 19:51:10.973962   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHKeyPath
	I1001 19:51:10.974141   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHKeyPath
	I1001 19:51:10.974287   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHUsername
	I1001 19:51:10.974415   48985 main.go:141] libmachine: Using SSH client type: native
	I1001 19:51:10.974584   48985 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I1001 19:51:10.974596   48985 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-325713 && echo "multinode-325713" | sudo tee /etc/hostname
	I1001 19:51:11.105668   48985 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-325713
	
	I1001 19:51:11.105703   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHHostname
	I1001 19:51:11.109013   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:51:11.109414   48985 main.go:141] libmachine: (multinode-325713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:17:a5", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:45:45 +0000 UTC Type:0 Mac:52:54:00:df:17:a5 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-325713 Clientid:01:52:54:00:df:17:a5}
	I1001 19:51:11.109448   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined IP address 192.168.39.165 and MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:51:11.109617   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHPort
	I1001 19:51:11.109784   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHKeyPath
	I1001 19:51:11.109964   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHKeyPath
	I1001 19:51:11.110109   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHUsername
	I1001 19:51:11.110246   48985 main.go:141] libmachine: Using SSH client type: native
	I1001 19:51:11.110491   48985 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I1001 19:51:11.110509   48985 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-325713' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-325713/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-325713' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 19:51:11.225617   48985 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 19:51:11.225657   48985 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 19:51:11.225703   48985 buildroot.go:174] setting up certificates
	I1001 19:51:11.225714   48985 provision.go:84] configureAuth start
	I1001 19:51:11.225728   48985 main.go:141] libmachine: (multinode-325713) Calling .GetMachineName
	I1001 19:51:11.226011   48985 main.go:141] libmachine: (multinode-325713) Calling .GetIP
	I1001 19:51:11.229092   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:51:11.229594   48985 main.go:141] libmachine: (multinode-325713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:17:a5", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:45:45 +0000 UTC Type:0 Mac:52:54:00:df:17:a5 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-325713 Clientid:01:52:54:00:df:17:a5}
	I1001 19:51:11.229624   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined IP address 192.168.39.165 and MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:51:11.229827   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHHostname
	I1001 19:51:11.232392   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:51:11.232794   48985 main.go:141] libmachine: (multinode-325713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:17:a5", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:45:45 +0000 UTC Type:0 Mac:52:54:00:df:17:a5 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-325713 Clientid:01:52:54:00:df:17:a5}
	I1001 19:51:11.232824   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined IP address 192.168.39.165 and MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:51:11.232988   48985 provision.go:143] copyHostCerts
	I1001 19:51:11.233016   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:51:11.233051   48985 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 19:51:11.233060   48985 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:51:11.233128   48985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 19:51:11.233205   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:51:11.233222   48985 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 19:51:11.233228   48985 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:51:11.233250   48985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 19:51:11.233308   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:51:11.233325   48985 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 19:51:11.233331   48985 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:51:11.233353   48985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 19:51:11.233401   48985 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.multinode-325713 san=[127.0.0.1 192.168.39.165 localhost minikube multinode-325713]
	I1001 19:51:11.334843   48985 provision.go:177] copyRemoteCerts
	I1001 19:51:11.334897   48985 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 19:51:11.334919   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHHostname
	I1001 19:51:11.337914   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:51:11.338230   48985 main.go:141] libmachine: (multinode-325713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:17:a5", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:45:45 +0000 UTC Type:0 Mac:52:54:00:df:17:a5 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-325713 Clientid:01:52:54:00:df:17:a5}
	I1001 19:51:11.338261   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined IP address 192.168.39.165 and MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:51:11.338450   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHPort
	I1001 19:51:11.338642   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHKeyPath
	I1001 19:51:11.338797   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHUsername
	I1001 19:51:11.338937   48985 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/multinode-325713/id_rsa Username:docker}
	I1001 19:51:11.430505   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 19:51:11.430569   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1001 19:51:11.456162   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 19:51:11.456250   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 19:51:11.486004   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 19:51:11.486087   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 19:51:11.511653   48985 provision.go:87] duration metric: took 285.917641ms to configureAuth
	I1001 19:51:11.511688   48985 buildroot.go:189] setting minikube options for container-runtime
	I1001 19:51:11.511934   48985 config.go:182] Loaded profile config "multinode-325713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:51:11.512027   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHHostname
	I1001 19:51:11.514911   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:51:11.515302   48985 main.go:141] libmachine: (multinode-325713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:17:a5", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:45:45 +0000 UTC Type:0 Mac:52:54:00:df:17:a5 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-325713 Clientid:01:52:54:00:df:17:a5}
	I1001 19:51:11.515332   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined IP address 192.168.39.165 and MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:51:11.515471   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHPort
	I1001 19:51:11.515653   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHKeyPath
	I1001 19:51:11.515834   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHKeyPath
	I1001 19:51:11.515986   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHUsername
	I1001 19:51:11.516153   48985 main.go:141] libmachine: Using SSH client type: native
	I1001 19:51:11.516353   48985 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I1001 19:51:11.516387   48985 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 19:52:42.157687   48985 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 19:52:42.157718   48985 machine.go:96] duration metric: took 1m31.310530644s to provisionDockerMachine
	I1001 19:52:42.157730   48985 start.go:293] postStartSetup for "multinode-325713" (driver="kvm2")
	I1001 19:52:42.157741   48985 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 19:52:42.157756   48985 main.go:141] libmachine: (multinode-325713) Calling .DriverName
	I1001 19:52:42.158042   48985 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 19:52:42.158068   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHHostname
	I1001 19:52:42.161141   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:52:42.161584   48985 main.go:141] libmachine: (multinode-325713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:17:a5", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:45:45 +0000 UTC Type:0 Mac:52:54:00:df:17:a5 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-325713 Clientid:01:52:54:00:df:17:a5}
	I1001 19:52:42.161625   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined IP address 192.168.39.165 and MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:52:42.161786   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHPort
	I1001 19:52:42.161945   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHKeyPath
	I1001 19:52:42.162083   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHUsername
	I1001 19:52:42.162195   48985 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/multinode-325713/id_rsa Username:docker}
	I1001 19:52:42.251747   48985 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 19:52:42.256158   48985 command_runner.go:130] > NAME=Buildroot
	I1001 19:52:42.256182   48985 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1001 19:52:42.256189   48985 command_runner.go:130] > ID=buildroot
	I1001 19:52:42.256195   48985 command_runner.go:130] > VERSION_ID=2023.02.9
	I1001 19:52:42.256202   48985 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1001 19:52:42.256242   48985 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 19:52:42.256255   48985 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 19:52:42.256319   48985 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 19:52:42.256429   48985 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 19:52:42.256441   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /etc/ssl/certs/184302.pem
	I1001 19:52:42.256568   48985 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 19:52:42.266325   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:52:42.289521   48985 start.go:296] duration metric: took 131.778857ms for postStartSetup
	I1001 19:52:42.289559   48985 fix.go:56] duration metric: took 1m31.463837736s for fixHost
	I1001 19:52:42.289581   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHHostname
	I1001 19:52:42.292123   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:52:42.292802   48985 main.go:141] libmachine: (multinode-325713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:17:a5", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:45:45 +0000 UTC Type:0 Mac:52:54:00:df:17:a5 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-325713 Clientid:01:52:54:00:df:17:a5}
	I1001 19:52:42.292838   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined IP address 192.168.39.165 and MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:52:42.293011   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHPort
	I1001 19:52:42.293184   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHKeyPath
	I1001 19:52:42.293347   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHKeyPath
	I1001 19:52:42.293474   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHUsername
	I1001 19:52:42.293620   48985 main.go:141] libmachine: Using SSH client type: native
	I1001 19:52:42.293850   48985 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I1001 19:52:42.293869   48985 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 19:52:42.404923   48985 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727812362.386079271
	
	I1001 19:52:42.404950   48985 fix.go:216] guest clock: 1727812362.386079271
	I1001 19:52:42.404960   48985 fix.go:229] Guest: 2024-10-01 19:52:42.386079271 +0000 UTC Remote: 2024-10-01 19:52:42.289564082 +0000 UTC m=+91.589958315 (delta=96.515189ms)
	I1001 19:52:42.405011   48985 fix.go:200] guest clock delta is within tolerance: 96.515189ms
	I1001 19:52:42.405023   48985 start.go:83] releasing machines lock for "multinode-325713", held for 1m31.579313815s
	I1001 19:52:42.405056   48985 main.go:141] libmachine: (multinode-325713) Calling .DriverName
	I1001 19:52:42.405314   48985 main.go:141] libmachine: (multinode-325713) Calling .GetIP
	I1001 19:52:42.408372   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:52:42.408735   48985 main.go:141] libmachine: (multinode-325713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:17:a5", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:45:45 +0000 UTC Type:0 Mac:52:54:00:df:17:a5 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-325713 Clientid:01:52:54:00:df:17:a5}
	I1001 19:52:42.408779   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined IP address 192.168.39.165 and MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:52:42.408951   48985 main.go:141] libmachine: (multinode-325713) Calling .DriverName
	I1001 19:52:42.409481   48985 main.go:141] libmachine: (multinode-325713) Calling .DriverName
	I1001 19:52:42.409657   48985 main.go:141] libmachine: (multinode-325713) Calling .DriverName
	I1001 19:52:42.409767   48985 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 19:52:42.409807   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHHostname
	I1001 19:52:42.409921   48985 ssh_runner.go:195] Run: cat /version.json
	I1001 19:52:42.409944   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHHostname
	I1001 19:52:42.412688   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:52:42.412710   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:52:42.413067   48985 main.go:141] libmachine: (multinode-325713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:17:a5", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:45:45 +0000 UTC Type:0 Mac:52:54:00:df:17:a5 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-325713 Clientid:01:52:54:00:df:17:a5}
	I1001 19:52:42.413094   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined IP address 192.168.39.165 and MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:52:42.413195   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHPort
	I1001 19:52:42.413233   48985 main.go:141] libmachine: (multinode-325713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:17:a5", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:45:45 +0000 UTC Type:0 Mac:52:54:00:df:17:a5 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-325713 Clientid:01:52:54:00:df:17:a5}
	I1001 19:52:42.413258   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined IP address 192.168.39.165 and MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:52:42.413357   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHKeyPath
	I1001 19:52:42.413445   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHPort
	I1001 19:52:42.413518   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHUsername
	I1001 19:52:42.413576   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHKeyPath
	I1001 19:52:42.413648   48985 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/multinode-325713/id_rsa Username:docker}
	I1001 19:52:42.413680   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHUsername
	I1001 19:52:42.413786   48985 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/multinode-325713/id_rsa Username:docker}
	I1001 19:52:42.531775   48985 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1001 19:52:42.532495   48985 command_runner.go:130] > {"iso_version": "v1.34.0-1727108440-19696", "kicbase_version": "v0.0.45-1726784731-19672", "minikube_version": "v1.34.0", "commit": "09d18ff16db81cf1cb24cd6e95f197b54c5f843c"}
	I1001 19:52:42.532649   48985 ssh_runner.go:195] Run: systemctl --version
	I1001 19:52:42.538741   48985 command_runner.go:130] > systemd 252 (252)
	I1001 19:52:42.538792   48985 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1001 19:52:42.538856   48985 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 19:52:42.695914   48985 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1001 19:52:42.702625   48985 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1001 19:52:42.703173   48985 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 19:52:42.703259   48985 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 19:52:42.713694   48985 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1001 19:52:42.713715   48985 start.go:495] detecting cgroup driver to use...
	I1001 19:52:42.713774   48985 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 19:52:42.729684   48985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 19:52:42.744592   48985 docker.go:217] disabling cri-docker service (if available) ...
	I1001 19:52:42.744650   48985 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 19:52:42.758763   48985 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 19:52:42.772847   48985 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 19:52:42.916228   48985 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 19:52:43.063910   48985 docker.go:233] disabling docker service ...
	I1001 19:52:43.063972   48985 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 19:52:43.081069   48985 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 19:52:43.094846   48985 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 19:52:43.235983   48985 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 19:52:43.375325   48985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 19:52:43.389733   48985 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 19:52:43.409229   48985 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1001 19:52:43.409276   48985 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 19:52:43.409330   48985 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:52:43.419916   48985 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 19:52:43.419986   48985 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:52:43.430540   48985 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:52:43.440765   48985 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:52:43.451073   48985 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 19:52:43.461793   48985 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:52:43.472503   48985 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:52:43.483966   48985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:52:43.494764   48985 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 19:52:43.503916   48985 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1001 19:52:43.504017   48985 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 19:52:43.513270   48985 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:52:43.655462   48985 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 19:52:49.608874   48985 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.953372381s)
	I1001 19:52:49.608907   48985 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 19:52:49.608950   48985 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 19:52:49.613603   48985 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1001 19:52:49.613629   48985 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1001 19:52:49.613638   48985 command_runner.go:130] > Device: 0,22	Inode: 1321        Links: 1
	I1001 19:52:49.613647   48985 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1001 19:52:49.613652   48985 command_runner.go:130] > Access: 2024-10-01 19:52:49.494949773 +0000
	I1001 19:52:49.613659   48985 command_runner.go:130] > Modify: 2024-10-01 19:52:49.494949773 +0000
	I1001 19:52:49.613664   48985 command_runner.go:130] > Change: 2024-10-01 19:52:49.494949773 +0000
	I1001 19:52:49.613669   48985 command_runner.go:130] >  Birth: -
	I1001 19:52:49.613693   48985 start.go:563] Will wait 60s for crictl version
	I1001 19:52:49.613740   48985 ssh_runner.go:195] Run: which crictl
	I1001 19:52:49.617116   48985 command_runner.go:130] > /usr/bin/crictl
	I1001 19:52:49.617170   48985 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 19:52:49.654888   48985 command_runner.go:130] > Version:  0.1.0
	I1001 19:52:49.654914   48985 command_runner.go:130] > RuntimeName:  cri-o
	I1001 19:52:49.654921   48985 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1001 19:52:49.654928   48985 command_runner.go:130] > RuntimeApiVersion:  v1
	I1001 19:52:49.654944   48985 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 19:52:49.655012   48985 ssh_runner.go:195] Run: crio --version
	I1001 19:52:49.683006   48985 command_runner.go:130] > crio version 1.29.1
	I1001 19:52:49.683031   48985 command_runner.go:130] > Version:        1.29.1
	I1001 19:52:49.683037   48985 command_runner.go:130] > GitCommit:      unknown
	I1001 19:52:49.683042   48985 command_runner.go:130] > GitCommitDate:  unknown
	I1001 19:52:49.683046   48985 command_runner.go:130] > GitTreeState:   clean
	I1001 19:52:49.683052   48985 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I1001 19:52:49.683056   48985 command_runner.go:130] > GoVersion:      go1.21.6
	I1001 19:52:49.683060   48985 command_runner.go:130] > Compiler:       gc
	I1001 19:52:49.683064   48985 command_runner.go:130] > Platform:       linux/amd64
	I1001 19:52:49.683067   48985 command_runner.go:130] > Linkmode:       dynamic
	I1001 19:52:49.683073   48985 command_runner.go:130] > BuildTags:      
	I1001 19:52:49.683077   48985 command_runner.go:130] >   containers_image_ostree_stub
	I1001 19:52:49.683081   48985 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1001 19:52:49.683084   48985 command_runner.go:130] >   btrfs_noversion
	I1001 19:52:49.683088   48985 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1001 19:52:49.683095   48985 command_runner.go:130] >   libdm_no_deferred_remove
	I1001 19:52:49.683100   48985 command_runner.go:130] >   seccomp
	I1001 19:52:49.683107   48985 command_runner.go:130] > LDFlags:          unknown
	I1001 19:52:49.683114   48985 command_runner.go:130] > SeccompEnabled:   true
	I1001 19:52:49.683121   48985 command_runner.go:130] > AppArmorEnabled:  false
	I1001 19:52:49.683195   48985 ssh_runner.go:195] Run: crio --version
	I1001 19:52:49.709814   48985 command_runner.go:130] > crio version 1.29.1
	I1001 19:52:49.709844   48985 command_runner.go:130] > Version:        1.29.1
	I1001 19:52:49.709851   48985 command_runner.go:130] > GitCommit:      unknown
	I1001 19:52:49.709857   48985 command_runner.go:130] > GitCommitDate:  unknown
	I1001 19:52:49.709861   48985 command_runner.go:130] > GitTreeState:   clean
	I1001 19:52:49.709867   48985 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I1001 19:52:49.709873   48985 command_runner.go:130] > GoVersion:      go1.21.6
	I1001 19:52:49.709877   48985 command_runner.go:130] > Compiler:       gc
	I1001 19:52:49.709881   48985 command_runner.go:130] > Platform:       linux/amd64
	I1001 19:52:49.709885   48985 command_runner.go:130] > Linkmode:       dynamic
	I1001 19:52:49.709889   48985 command_runner.go:130] > BuildTags:      
	I1001 19:52:49.709893   48985 command_runner.go:130] >   containers_image_ostree_stub
	I1001 19:52:49.709897   48985 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1001 19:52:49.709901   48985 command_runner.go:130] >   btrfs_noversion
	I1001 19:52:49.709905   48985 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1001 19:52:49.709911   48985 command_runner.go:130] >   libdm_no_deferred_remove
	I1001 19:52:49.709915   48985 command_runner.go:130] >   seccomp
	I1001 19:52:49.709921   48985 command_runner.go:130] > LDFlags:          unknown
	I1001 19:52:49.709925   48985 command_runner.go:130] > SeccompEnabled:   true
	I1001 19:52:49.709930   48985 command_runner.go:130] > AppArmorEnabled:  false
	I1001 19:52:49.712655   48985 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 19:52:49.714066   48985 main.go:141] libmachine: (multinode-325713) Calling .GetIP
	I1001 19:52:49.716752   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:52:49.717108   48985 main.go:141] libmachine: (multinode-325713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:17:a5", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:45:45 +0000 UTC Type:0 Mac:52:54:00:df:17:a5 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-325713 Clientid:01:52:54:00:df:17:a5}
	I1001 19:52:49.717137   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined IP address 192.168.39.165 and MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:52:49.717326   48985 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 19:52:49.721372   48985 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1001 19:52:49.721574   48985 kubeadm.go:883] updating cluster {Name:multinode-325713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:multinode-325713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.61 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadge
t:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 19:52:49.721709   48985 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 19:52:49.721763   48985 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 19:52:49.758658   48985 command_runner.go:130] > {
	I1001 19:52:49.758685   48985 command_runner.go:130] >   "images": [
	I1001 19:52:49.758691   48985 command_runner.go:130] >     {
	I1001 19:52:49.758704   48985 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1001 19:52:49.758713   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.758723   48985 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1001 19:52:49.758728   48985 command_runner.go:130] >       ],
	I1001 19:52:49.758735   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.758748   48985 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1001 19:52:49.758763   48985 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1001 19:52:49.758769   48985 command_runner.go:130] >       ],
	I1001 19:52:49.758780   48985 command_runner.go:130] >       "size": "87190579",
	I1001 19:52:49.758790   48985 command_runner.go:130] >       "uid": null,
	I1001 19:52:49.758799   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.758811   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.758819   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.758824   48985 command_runner.go:130] >     },
	I1001 19:52:49.758829   48985 command_runner.go:130] >     {
	I1001 19:52:49.758841   48985 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1001 19:52:49.758850   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.758860   48985 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1001 19:52:49.758869   48985 command_runner.go:130] >       ],
	I1001 19:52:49.758879   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.758906   48985 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1001 19:52:49.758920   48985 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1001 19:52:49.758924   48985 command_runner.go:130] >       ],
	I1001 19:52:49.758929   48985 command_runner.go:130] >       "size": "1363676",
	I1001 19:52:49.758937   48985 command_runner.go:130] >       "uid": null,
	I1001 19:52:49.758950   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.758959   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.758969   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.758977   48985 command_runner.go:130] >     },
	I1001 19:52:49.758989   48985 command_runner.go:130] >     {
	I1001 19:52:49.759000   48985 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1001 19:52:49.759009   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.759017   48985 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1001 19:52:49.759022   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759031   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.759046   48985 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1001 19:52:49.759062   48985 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1001 19:52:49.759071   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759081   48985 command_runner.go:130] >       "size": "31470524",
	I1001 19:52:49.759090   48985 command_runner.go:130] >       "uid": null,
	I1001 19:52:49.759100   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.759107   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.759113   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.759121   48985 command_runner.go:130] >     },
	I1001 19:52:49.759128   48985 command_runner.go:130] >     {
	I1001 19:52:49.759137   48985 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1001 19:52:49.759146   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.759154   48985 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1001 19:52:49.759162   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759168   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.759182   48985 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1001 19:52:49.759203   48985 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1001 19:52:49.759212   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759222   48985 command_runner.go:130] >       "size": "63273227",
	I1001 19:52:49.759231   48985 command_runner.go:130] >       "uid": null,
	I1001 19:52:49.759238   48985 command_runner.go:130] >       "username": "nonroot",
	I1001 19:52:49.759248   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.759256   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.759264   48985 command_runner.go:130] >     },
	I1001 19:52:49.759269   48985 command_runner.go:130] >     {
	I1001 19:52:49.759280   48985 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1001 19:52:49.759289   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.759297   48985 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1001 19:52:49.759302   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759307   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.759316   48985 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1001 19:52:49.759324   48985 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1001 19:52:49.759333   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759337   48985 command_runner.go:130] >       "size": "149009664",
	I1001 19:52:49.759341   48985 command_runner.go:130] >       "uid": {
	I1001 19:52:49.759345   48985 command_runner.go:130] >         "value": "0"
	I1001 19:52:49.759351   48985 command_runner.go:130] >       },
	I1001 19:52:49.759354   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.759358   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.759364   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.759366   48985 command_runner.go:130] >     },
	I1001 19:52:49.759370   48985 command_runner.go:130] >     {
	I1001 19:52:49.759377   48985 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1001 19:52:49.759382   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.759387   48985 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1001 19:52:49.759392   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759396   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.759405   48985 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1001 19:52:49.759414   48985 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1001 19:52:49.759419   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759423   48985 command_runner.go:130] >       "size": "95237600",
	I1001 19:52:49.759434   48985 command_runner.go:130] >       "uid": {
	I1001 19:52:49.759445   48985 command_runner.go:130] >         "value": "0"
	I1001 19:52:49.759449   48985 command_runner.go:130] >       },
	I1001 19:52:49.759453   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.759457   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.759461   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.759464   48985 command_runner.go:130] >     },
	I1001 19:52:49.759467   48985 command_runner.go:130] >     {
	I1001 19:52:49.759473   48985 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1001 19:52:49.759479   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.759484   48985 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1001 19:52:49.759488   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759493   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.759501   48985 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1001 19:52:49.759510   48985 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1001 19:52:49.759516   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759520   48985 command_runner.go:130] >       "size": "89437508",
	I1001 19:52:49.759524   48985 command_runner.go:130] >       "uid": {
	I1001 19:52:49.759528   48985 command_runner.go:130] >         "value": "0"
	I1001 19:52:49.759531   48985 command_runner.go:130] >       },
	I1001 19:52:49.759535   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.759541   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.759545   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.759548   48985 command_runner.go:130] >     },
	I1001 19:52:49.759554   48985 command_runner.go:130] >     {
	I1001 19:52:49.759560   48985 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1001 19:52:49.759565   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.759570   48985 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1001 19:52:49.759573   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759577   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.759597   48985 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1001 19:52:49.759606   48985 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1001 19:52:49.759609   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759622   48985 command_runner.go:130] >       "size": "92733849",
	I1001 19:52:49.759628   48985 command_runner.go:130] >       "uid": null,
	I1001 19:52:49.759632   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.759635   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.759639   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.759643   48985 command_runner.go:130] >     },
	I1001 19:52:49.759648   48985 command_runner.go:130] >     {
	I1001 19:52:49.759657   48985 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1001 19:52:49.759663   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.759670   48985 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1001 19:52:49.759674   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759679   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.759694   48985 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1001 19:52:49.759708   48985 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1001 19:52:49.759716   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759722   48985 command_runner.go:130] >       "size": "68420934",
	I1001 19:52:49.759730   48985 command_runner.go:130] >       "uid": {
	I1001 19:52:49.759735   48985 command_runner.go:130] >         "value": "0"
	I1001 19:52:49.759740   48985 command_runner.go:130] >       },
	I1001 19:52:49.759746   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.759755   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.759761   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.759769   48985 command_runner.go:130] >     },
	I1001 19:52:49.759775   48985 command_runner.go:130] >     {
	I1001 19:52:49.759787   48985 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1001 19:52:49.759797   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.759804   48985 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1001 19:52:49.759811   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759815   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.759824   48985 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1001 19:52:49.759830   48985 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1001 19:52:49.759838   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759844   48985 command_runner.go:130] >       "size": "742080",
	I1001 19:52:49.759856   48985 command_runner.go:130] >       "uid": {
	I1001 19:52:49.759865   48985 command_runner.go:130] >         "value": "65535"
	I1001 19:52:49.759871   48985 command_runner.go:130] >       },
	I1001 19:52:49.759880   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.759886   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.759896   48985 command_runner.go:130] >       "pinned": true
	I1001 19:52:49.759901   48985 command_runner.go:130] >     }
	I1001 19:52:49.759910   48985 command_runner.go:130] >   ]
	I1001 19:52:49.759915   48985 command_runner.go:130] > }
	I1001 19:52:49.760127   48985 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 19:52:49.760148   48985 crio.go:433] Images already preloaded, skipping extraction
	I1001 19:52:49.760210   48985 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 19:52:49.791654   48985 command_runner.go:130] > {
	I1001 19:52:49.791675   48985 command_runner.go:130] >   "images": [
	I1001 19:52:49.791681   48985 command_runner.go:130] >     {
	I1001 19:52:49.791690   48985 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1001 19:52:49.791700   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.791712   48985 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1001 19:52:49.791718   48985 command_runner.go:130] >       ],
	I1001 19:52:49.791725   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.791748   48985 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1001 19:52:49.791763   48985 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1001 19:52:49.791770   48985 command_runner.go:130] >       ],
	I1001 19:52:49.791782   48985 command_runner.go:130] >       "size": "87190579",
	I1001 19:52:49.791789   48985 command_runner.go:130] >       "uid": null,
	I1001 19:52:49.791801   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.791823   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.791834   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.791839   48985 command_runner.go:130] >     },
	I1001 19:52:49.791844   48985 command_runner.go:130] >     {
	I1001 19:52:49.791851   48985 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1001 19:52:49.791858   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.791863   48985 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1001 19:52:49.791870   48985 command_runner.go:130] >       ],
	I1001 19:52:49.791875   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.791885   48985 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1001 19:52:49.791895   48985 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1001 19:52:49.791903   48985 command_runner.go:130] >       ],
	I1001 19:52:49.791911   48985 command_runner.go:130] >       "size": "1363676",
	I1001 19:52:49.791916   48985 command_runner.go:130] >       "uid": null,
	I1001 19:52:49.791925   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.791930   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.791934   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.791940   48985 command_runner.go:130] >     },
	I1001 19:52:49.791944   48985 command_runner.go:130] >     {
	I1001 19:52:49.791954   48985 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1001 19:52:49.791959   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.791966   48985 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1001 19:52:49.791972   48985 command_runner.go:130] >       ],
	I1001 19:52:49.791977   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.791987   48985 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1001 19:52:49.792000   48985 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1001 19:52:49.792006   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792011   48985 command_runner.go:130] >       "size": "31470524",
	I1001 19:52:49.792023   48985 command_runner.go:130] >       "uid": null,
	I1001 19:52:49.792027   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.792031   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.792037   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.792041   48985 command_runner.go:130] >     },
	I1001 19:52:49.792048   48985 command_runner.go:130] >     {
	I1001 19:52:49.792054   48985 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1001 19:52:49.792069   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.792074   48985 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1001 19:52:49.792077   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792081   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.792088   48985 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1001 19:52:49.792104   48985 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1001 19:52:49.792110   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792114   48985 command_runner.go:130] >       "size": "63273227",
	I1001 19:52:49.792118   48985 command_runner.go:130] >       "uid": null,
	I1001 19:52:49.792123   48985 command_runner.go:130] >       "username": "nonroot",
	I1001 19:52:49.792132   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.792136   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.792141   48985 command_runner.go:130] >     },
	I1001 19:52:49.792144   48985 command_runner.go:130] >     {
	I1001 19:52:49.792150   48985 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1001 19:52:49.792156   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.792161   48985 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1001 19:52:49.792164   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792168   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.792175   48985 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1001 19:52:49.792183   48985 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1001 19:52:49.792186   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792190   48985 command_runner.go:130] >       "size": "149009664",
	I1001 19:52:49.792195   48985 command_runner.go:130] >       "uid": {
	I1001 19:52:49.792198   48985 command_runner.go:130] >         "value": "0"
	I1001 19:52:49.792203   48985 command_runner.go:130] >       },
	I1001 19:52:49.792207   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.792211   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.792215   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.792218   48985 command_runner.go:130] >     },
	I1001 19:52:49.792222   48985 command_runner.go:130] >     {
	I1001 19:52:49.792229   48985 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1001 19:52:49.792233   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.792238   48985 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1001 19:52:49.792244   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792248   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.792255   48985 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1001 19:52:49.792264   48985 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1001 19:52:49.792268   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792271   48985 command_runner.go:130] >       "size": "95237600",
	I1001 19:52:49.792275   48985 command_runner.go:130] >       "uid": {
	I1001 19:52:49.792281   48985 command_runner.go:130] >         "value": "0"
	I1001 19:52:49.792289   48985 command_runner.go:130] >       },
	I1001 19:52:49.792294   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.792298   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.792301   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.792306   48985 command_runner.go:130] >     },
	I1001 19:52:49.792309   48985 command_runner.go:130] >     {
	I1001 19:52:49.792315   48985 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1001 19:52:49.792321   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.792326   48985 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1001 19:52:49.792330   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792334   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.792341   48985 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1001 19:52:49.792350   48985 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1001 19:52:49.792365   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792371   48985 command_runner.go:130] >       "size": "89437508",
	I1001 19:52:49.792376   48985 command_runner.go:130] >       "uid": {
	I1001 19:52:49.792380   48985 command_runner.go:130] >         "value": "0"
	I1001 19:52:49.792386   48985 command_runner.go:130] >       },
	I1001 19:52:49.792390   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.792394   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.792397   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.792400   48985 command_runner.go:130] >     },
	I1001 19:52:49.792404   48985 command_runner.go:130] >     {
	I1001 19:52:49.792412   48985 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1001 19:52:49.792416   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.792421   48985 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1001 19:52:49.792431   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792437   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.792450   48985 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1001 19:52:49.792460   48985 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1001 19:52:49.792463   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792467   48985 command_runner.go:130] >       "size": "92733849",
	I1001 19:52:49.792471   48985 command_runner.go:130] >       "uid": null,
	I1001 19:52:49.792475   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.792480   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.792484   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.792487   48985 command_runner.go:130] >     },
	I1001 19:52:49.792490   48985 command_runner.go:130] >     {
	I1001 19:52:49.792496   48985 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1001 19:52:49.792502   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.792507   48985 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1001 19:52:49.792511   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792515   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.792522   48985 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1001 19:52:49.792530   48985 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1001 19:52:49.792534   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792538   48985 command_runner.go:130] >       "size": "68420934",
	I1001 19:52:49.792544   48985 command_runner.go:130] >       "uid": {
	I1001 19:52:49.792548   48985 command_runner.go:130] >         "value": "0"
	I1001 19:52:49.792551   48985 command_runner.go:130] >       },
	I1001 19:52:49.792555   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.792559   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.792563   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.792567   48985 command_runner.go:130] >     },
	I1001 19:52:49.792571   48985 command_runner.go:130] >     {
	I1001 19:52:49.792578   48985 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1001 19:52:49.792582   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.792586   48985 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1001 19:52:49.792590   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792594   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.792603   48985 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1001 19:52:49.792612   48985 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1001 19:52:49.792617   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792621   48985 command_runner.go:130] >       "size": "742080",
	I1001 19:52:49.792624   48985 command_runner.go:130] >       "uid": {
	I1001 19:52:49.792629   48985 command_runner.go:130] >         "value": "65535"
	I1001 19:52:49.792639   48985 command_runner.go:130] >       },
	I1001 19:52:49.792643   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.792647   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.792651   48985 command_runner.go:130] >       "pinned": true
	I1001 19:52:49.792654   48985 command_runner.go:130] >     }
	I1001 19:52:49.792657   48985 command_runner.go:130] >   ]
	I1001 19:52:49.792660   48985 command_runner.go:130] > }
	I1001 19:52:49.792766   48985 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 19:52:49.792777   48985 cache_images.go:84] Images are preloaded, skipping loading
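The `sudo crictl images --output json` output above is what the "all images are preloaded" / "Images are preloaded, skipping loading" decision consumes. The following is a minimal Go sketch (not minikube's actual crio.go/cache_images.go code) of that kind of check: it decodes the JSON shape shown in the log and verifies that a set of expected repo tags is present. The required-tag list is illustrative only.

	// preloadcheck.go: a hedged sketch of a "preloaded images" check.
	// It decodes the `sudo crictl images --output json` shape shown above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		// Illustrative list; the real set depends on the Kubernetes version.
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.31.1",
			"registry.k8s.io/kube-proxy:v1.31.1",
			"registry.k8s.io/pause:3.10",
		}

		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}

		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			fmt.Println("decode failed:", err)
			return
		}

		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		for _, tag := range required {
			if !have[tag] {
				fmt.Println("missing:", tag)
				return
			}
		}
		fmt.Println("all expected images are present")
	}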
	I1001 19:52:49.792785   48985 kubeadm.go:934] updating node { 192.168.39.165 8443 v1.31.1 crio true true} ...
	I1001 19:52:49.792881   48985 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-325713 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-325713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
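The kubeadm.go:946 block above shows the kubelet systemd override that gets rendered for this node. As a hedged illustration only (the type and function names below are hypothetical, not minikube's API), the ExecStart line could be assembled from the node parameters like this:

	// kubeletflags.go: a sketch of composing the logged ExecStart line.
	package main

	import (
		"fmt"
		"strings"
	)

	type nodeParams struct {
		KubernetesVersion string
		Hostname          string
		NodeIP            string
	}

	func execStart(p nodeParams) string {
		flags := []string{
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
			"--config=/var/lib/kubelet/config.yaml",
			"--hostname-override=" + p.Hostname,
			"--kubeconfig=/etc/kubernetes/kubelet.conf",
			"--node-ip=" + p.NodeIP,
		}
		return fmt.Sprintf("ExecStart=/var/lib/minikube/binaries/%s/kubelet %s",
			p.KubernetesVersion, strings.Join(flags, " "))
	}

	func main() {
		// Values taken from the log above.
		fmt.Println(execStart(nodeParams{
			KubernetesVersion: "v1.31.1",
			Hostname:          "multinode-325713",
			NodeIP:            "192.168.39.165",
		}))
	}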
	I1001 19:52:49.792939   48985 ssh_runner.go:195] Run: crio config
	I1001 19:52:49.826974   48985 command_runner.go:130] ! time="2024-10-01 19:52:49.808021555Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1001 19:52:49.832557   48985 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1001 19:52:49.837777   48985 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1001 19:52:49.837813   48985 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1001 19:52:49.837824   48985 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1001 19:52:49.837829   48985 command_runner.go:130] > #
	I1001 19:52:49.837838   48985 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1001 19:52:49.837847   48985 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1001 19:52:49.837859   48985 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1001 19:52:49.837875   48985 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1001 19:52:49.837884   48985 command_runner.go:130] > # reload'.
	I1001 19:52:49.837892   48985 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1001 19:52:49.837900   48985 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1001 19:52:49.837907   48985 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1001 19:52:49.837914   48985 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1001 19:52:49.837923   48985 command_runner.go:130] > [crio]
	I1001 19:52:49.837935   48985 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1001 19:52:49.837945   48985 command_runner.go:130] > # containers images, in this directory.
	I1001 19:52:49.837953   48985 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1001 19:52:49.837981   48985 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1001 19:52:49.837992   48985 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1001 19:52:49.838004   48985 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1001 19:52:49.838013   48985 command_runner.go:130] > # imagestore = ""
	I1001 19:52:49.838026   48985 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1001 19:52:49.838038   48985 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1001 19:52:49.838047   48985 command_runner.go:130] > storage_driver = "overlay"
	I1001 19:52:49.838060   48985 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1001 19:52:49.838071   48985 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1001 19:52:49.838079   48985 command_runner.go:130] > storage_option = [
	I1001 19:52:49.838086   48985 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1001 19:52:49.838090   48985 command_runner.go:130] > ]
	I1001 19:52:49.838098   48985 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1001 19:52:49.838106   48985 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1001 19:52:49.838112   48985 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1001 19:52:49.838117   48985 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1001 19:52:49.838125   48985 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1001 19:52:49.838137   48985 command_runner.go:130] > # always happen on a node reboot
	I1001 19:52:49.838147   48985 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1001 19:52:49.838168   48985 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1001 19:52:49.838180   48985 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1001 19:52:49.838191   48985 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1001 19:52:49.838199   48985 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1001 19:52:49.838213   48985 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1001 19:52:49.838232   48985 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1001 19:52:49.838241   48985 command_runner.go:130] > # internal_wipe = true
	I1001 19:52:49.838255   48985 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1001 19:52:49.838266   48985 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1001 19:52:49.838275   48985 command_runner.go:130] > # internal_repair = false
	I1001 19:52:49.838286   48985 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1001 19:52:49.838297   48985 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1001 19:52:49.838308   48985 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1001 19:52:49.838319   48985 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1001 19:52:49.838334   48985 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1001 19:52:49.838340   48985 command_runner.go:130] > [crio.api]
	I1001 19:52:49.838346   48985 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1001 19:52:49.838352   48985 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1001 19:52:49.838357   48985 command_runner.go:130] > # IP address on which the stream server will listen.
	I1001 19:52:49.838363   48985 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1001 19:52:49.838369   48985 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1001 19:52:49.838376   48985 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1001 19:52:49.838379   48985 command_runner.go:130] > # stream_port = "0"
	I1001 19:52:49.838386   48985 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1001 19:52:49.838393   48985 command_runner.go:130] > # stream_enable_tls = false
	I1001 19:52:49.838399   48985 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1001 19:52:49.838405   48985 command_runner.go:130] > # stream_idle_timeout = ""
	I1001 19:52:49.838411   48985 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1001 19:52:49.838419   48985 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1001 19:52:49.838424   48985 command_runner.go:130] > # minutes.
	I1001 19:52:49.838428   48985 command_runner.go:130] > # stream_tls_cert = ""
	I1001 19:52:49.838440   48985 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1001 19:52:49.838448   48985 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1001 19:52:49.838454   48985 command_runner.go:130] > # stream_tls_key = ""
	I1001 19:52:49.838460   48985 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1001 19:52:49.838467   48985 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1001 19:52:49.838489   48985 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1001 19:52:49.838495   48985 command_runner.go:130] > # stream_tls_ca = ""
	I1001 19:52:49.838503   48985 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1001 19:52:49.838509   48985 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1001 19:52:49.838516   48985 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1001 19:52:49.838523   48985 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1001 19:52:49.838529   48985 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1001 19:52:49.838536   48985 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1001 19:52:49.838540   48985 command_runner.go:130] > [crio.runtime]
	I1001 19:52:49.838546   48985 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1001 19:52:49.838552   48985 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1001 19:52:49.838558   48985 command_runner.go:130] > # "nofile=1024:2048"
	I1001 19:52:49.838564   48985 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1001 19:52:49.838570   48985 command_runner.go:130] > # default_ulimits = [
	I1001 19:52:49.838573   48985 command_runner.go:130] > # ]
	I1001 19:52:49.838579   48985 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1001 19:52:49.838585   48985 command_runner.go:130] > # no_pivot = false
	I1001 19:52:49.838593   48985 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1001 19:52:49.838600   48985 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1001 19:52:49.838607   48985 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1001 19:52:49.838613   48985 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1001 19:52:49.838623   48985 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1001 19:52:49.838630   48985 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1001 19:52:49.838636   48985 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1001 19:52:49.838640   48985 command_runner.go:130] > # Cgroup setting for conmon
	I1001 19:52:49.838649   48985 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1001 19:52:49.838653   48985 command_runner.go:130] > conmon_cgroup = "pod"
	I1001 19:52:49.838659   48985 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1001 19:52:49.838667   48985 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1001 19:52:49.838674   48985 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1001 19:52:49.838680   48985 command_runner.go:130] > conmon_env = [
	I1001 19:52:49.838689   48985 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1001 19:52:49.838694   48985 command_runner.go:130] > ]
	I1001 19:52:49.838699   48985 command_runner.go:130] > # Additional environment variables to set for all the
	I1001 19:52:49.838706   48985 command_runner.go:130] > # containers. These are overridden if set in the
	I1001 19:52:49.838711   48985 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1001 19:52:49.838717   48985 command_runner.go:130] > # default_env = [
	I1001 19:52:49.838720   48985 command_runner.go:130] > # ]
	I1001 19:52:49.838726   48985 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1001 19:52:49.838735   48985 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1001 19:52:49.838739   48985 command_runner.go:130] > # selinux = false
	I1001 19:52:49.838745   48985 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1001 19:52:49.838753   48985 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1001 19:52:49.838763   48985 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1001 19:52:49.838769   48985 command_runner.go:130] > # seccomp_profile = ""
	I1001 19:52:49.838774   48985 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1001 19:52:49.838782   48985 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1001 19:52:49.838787   48985 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1001 19:52:49.838794   48985 command_runner.go:130] > # which might increase security.
	I1001 19:52:49.838798   48985 command_runner.go:130] > # This option is currently deprecated,
	I1001 19:52:49.838806   48985 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1001 19:52:49.838813   48985 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1001 19:52:49.838819   48985 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1001 19:52:49.838827   48985 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1001 19:52:49.838837   48985 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1001 19:52:49.838852   48985 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1001 19:52:49.838857   48985 command_runner.go:130] > # This option supports live configuration reload.
	I1001 19:52:49.838864   48985 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1001 19:52:49.838869   48985 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1001 19:52:49.838879   48985 command_runner.go:130] > # the cgroup blockio controller.
	I1001 19:52:49.838883   48985 command_runner.go:130] > # blockio_config_file = ""
	I1001 19:52:49.838893   48985 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1001 19:52:49.838899   48985 command_runner.go:130] > # blockio parameters.
	I1001 19:52:49.838903   48985 command_runner.go:130] > # blockio_reload = false
	I1001 19:52:49.838911   48985 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1001 19:52:49.838917   48985 command_runner.go:130] > # irqbalance daemon.
	I1001 19:52:49.838922   48985 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1001 19:52:49.838930   48985 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1001 19:52:49.838937   48985 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1001 19:52:49.838945   48985 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1001 19:52:49.838953   48985 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1001 19:52:49.838961   48985 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1001 19:52:49.838968   48985 command_runner.go:130] > # This option supports live configuration reload.
	I1001 19:52:49.838972   48985 command_runner.go:130] > # rdt_config_file = ""
	I1001 19:52:49.838978   48985 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1001 19:52:49.838982   48985 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1001 19:52:49.838998   48985 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1001 19:52:49.839004   48985 command_runner.go:130] > # separate_pull_cgroup = ""
	I1001 19:52:49.839010   48985 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1001 19:52:49.839019   48985 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1001 19:52:49.839022   48985 command_runner.go:130] > # will be added.
	I1001 19:52:49.839029   48985 command_runner.go:130] > # default_capabilities = [
	I1001 19:52:49.839045   48985 command_runner.go:130] > # 	"CHOWN",
	I1001 19:52:49.839055   48985 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1001 19:52:49.839061   48985 command_runner.go:130] > # 	"FSETID",
	I1001 19:52:49.839065   48985 command_runner.go:130] > # 	"FOWNER",
	I1001 19:52:49.839070   48985 command_runner.go:130] > # 	"SETGID",
	I1001 19:52:49.839074   48985 command_runner.go:130] > # 	"SETUID",
	I1001 19:52:49.839080   48985 command_runner.go:130] > # 	"SETPCAP",
	I1001 19:52:49.839084   48985 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1001 19:52:49.839094   48985 command_runner.go:130] > # 	"KILL",
	I1001 19:52:49.839098   48985 command_runner.go:130] > # ]
	I1001 19:52:49.839107   48985 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1001 19:52:49.839113   48985 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1001 19:52:49.839125   48985 command_runner.go:130] > # add_inheritable_capabilities = false
	I1001 19:52:49.839134   48985 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1001 19:52:49.839143   48985 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1001 19:52:49.839152   48985 command_runner.go:130] > default_sysctls = [
	I1001 19:52:49.839161   48985 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1001 19:52:49.839168   48985 command_runner.go:130] > ]
	I1001 19:52:49.839177   48985 command_runner.go:130] > # List of devices on the host that a
	I1001 19:52:49.839189   48985 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1001 19:52:49.839198   48985 command_runner.go:130] > # allowed_devices = [
	I1001 19:52:49.839206   48985 command_runner.go:130] > # 	"/dev/fuse",
	I1001 19:52:49.839212   48985 command_runner.go:130] > # ]
	I1001 19:52:49.839222   48985 command_runner.go:130] > # List of additional devices. specified as
	I1001 19:52:49.839231   48985 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1001 19:52:49.839238   48985 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1001 19:52:49.839244   48985 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1001 19:52:49.839250   48985 command_runner.go:130] > # additional_devices = [
	I1001 19:52:49.839253   48985 command_runner.go:130] > # ]
	I1001 19:52:49.839260   48985 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1001 19:52:49.839264   48985 command_runner.go:130] > # cdi_spec_dirs = [
	I1001 19:52:49.839268   48985 command_runner.go:130] > # 	"/etc/cdi",
	I1001 19:52:49.839272   48985 command_runner.go:130] > # 	"/var/run/cdi",
	I1001 19:52:49.839278   48985 command_runner.go:130] > # ]
	I1001 19:52:49.839284   48985 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1001 19:52:49.839298   48985 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1001 19:52:49.839303   48985 command_runner.go:130] > # Defaults to false.
	I1001 19:52:49.839310   48985 command_runner.go:130] > # device_ownership_from_security_context = false
	I1001 19:52:49.839316   48985 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1001 19:52:49.839323   48985 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1001 19:52:49.839329   48985 command_runner.go:130] > # hooks_dir = [
	I1001 19:52:49.839334   48985 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1001 19:52:49.839339   48985 command_runner.go:130] > # ]
	I1001 19:52:49.839344   48985 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1001 19:52:49.839352   48985 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1001 19:52:49.839359   48985 command_runner.go:130] > # its default mounts from the following two files:
	I1001 19:52:49.839364   48985 command_runner.go:130] > #
	I1001 19:52:49.839370   48985 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1001 19:52:49.839378   48985 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1001 19:52:49.839386   48985 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1001 19:52:49.839389   48985 command_runner.go:130] > #
	I1001 19:52:49.839394   48985 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1001 19:52:49.839402   48985 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1001 19:52:49.839411   48985 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1001 19:52:49.839419   48985 command_runner.go:130] > #      only add mounts it finds in this file.
	I1001 19:52:49.839425   48985 command_runner.go:130] > #
	I1001 19:52:49.839429   48985 command_runner.go:130] > # default_mounts_file = ""
	I1001 19:52:49.839437   48985 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1001 19:52:49.839443   48985 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1001 19:52:49.839449   48985 command_runner.go:130] > pids_limit = 1024
	I1001 19:52:49.839456   48985 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1001 19:52:49.839463   48985 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1001 19:52:49.839472   48985 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1001 19:52:49.839479   48985 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1001 19:52:49.839485   48985 command_runner.go:130] > # log_size_max = -1
	I1001 19:52:49.839491   48985 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1001 19:52:49.839497   48985 command_runner.go:130] > # log_to_journald = false
	I1001 19:52:49.839503   48985 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1001 19:52:49.839509   48985 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1001 19:52:49.839514   48985 command_runner.go:130] > # Path to directory for container attach sockets.
	I1001 19:52:49.839520   48985 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1001 19:52:49.839531   48985 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1001 19:52:49.839535   48985 command_runner.go:130] > # bind_mount_prefix = ""
	I1001 19:52:49.839541   48985 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1001 19:52:49.839545   48985 command_runner.go:130] > # read_only = false
	I1001 19:52:49.839552   48985 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1001 19:52:49.839559   48985 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1001 19:52:49.839563   48985 command_runner.go:130] > # live configuration reload.
	I1001 19:52:49.839572   48985 command_runner.go:130] > # log_level = "info"
	I1001 19:52:49.839578   48985 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1001 19:52:49.839585   48985 command_runner.go:130] > # This option supports live configuration reload.
	I1001 19:52:49.839588   48985 command_runner.go:130] > # log_filter = ""
	I1001 19:52:49.839594   48985 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1001 19:52:49.839603   48985 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1001 19:52:49.839609   48985 command_runner.go:130] > # separated by comma.
	I1001 19:52:49.839616   48985 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1001 19:52:49.839622   48985 command_runner.go:130] > # uid_mappings = ""
	I1001 19:52:49.839627   48985 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1001 19:52:49.839635   48985 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1001 19:52:49.839647   48985 command_runner.go:130] > # separated by comma.
	I1001 19:52:49.839654   48985 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1001 19:52:49.839662   48985 command_runner.go:130] > # gid_mappings = ""
	I1001 19:52:49.839669   48985 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1001 19:52:49.839676   48985 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1001 19:52:49.839688   48985 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1001 19:52:49.839697   48985 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1001 19:52:49.839703   48985 command_runner.go:130] > # minimum_mappable_uid = -1
	I1001 19:52:49.839709   48985 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1001 19:52:49.839717   48985 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1001 19:52:49.839724   48985 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1001 19:52:49.839732   48985 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1001 19:52:49.839737   48985 command_runner.go:130] > # minimum_mappable_gid = -1
	I1001 19:52:49.839743   48985 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1001 19:52:49.839749   48985 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1001 19:52:49.839755   48985 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1001 19:52:49.839760   48985 command_runner.go:130] > # ctr_stop_timeout = 30
	I1001 19:52:49.839765   48985 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1001 19:52:49.839773   48985 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1001 19:52:49.839779   48985 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1001 19:52:49.839786   48985 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1001 19:52:49.839790   48985 command_runner.go:130] > drop_infra_ctr = false
	I1001 19:52:49.839798   48985 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1001 19:52:49.839805   48985 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1001 19:52:49.839812   48985 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1001 19:52:49.839818   48985 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1001 19:52:49.839826   48985 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1001 19:52:49.839835   48985 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1001 19:52:49.839846   48985 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1001 19:52:49.839852   48985 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1001 19:52:49.839857   48985 command_runner.go:130] > # shared_cpuset = ""
	I1001 19:52:49.839863   48985 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1001 19:52:49.839869   48985 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1001 19:52:49.839873   48985 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1001 19:52:49.839882   48985 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1001 19:52:49.839888   48985 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1001 19:52:49.839893   48985 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1001 19:52:49.839903   48985 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1001 19:52:49.839909   48985 command_runner.go:130] > # enable_criu_support = false
	I1001 19:52:49.839914   48985 command_runner.go:130] > # Enable/disable the generation of the container,
	I1001 19:52:49.839922   48985 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1001 19:52:49.839926   48985 command_runner.go:130] > # enable_pod_events = false
	I1001 19:52:49.839934   48985 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1001 19:52:49.839940   48985 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1001 19:52:49.839949   48985 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1001 19:52:49.839954   48985 command_runner.go:130] > # default_runtime = "runc"
	I1001 19:52:49.839960   48985 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1001 19:52:49.839968   48985 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1001 19:52:49.839979   48985 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1001 19:52:49.839986   48985 command_runner.go:130] > # creation as a file is not desired either.
	I1001 19:52:49.839994   48985 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1001 19:52:49.840000   48985 command_runner.go:130] > # the hostname is being managed dynamically.
	I1001 19:52:49.840005   48985 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1001 19:52:49.840010   48985 command_runner.go:130] > # ]
	I1001 19:52:49.840016   48985 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1001 19:52:49.840024   48985 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1001 19:52:49.840033   48985 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1001 19:52:49.840040   48985 command_runner.go:130] > # Each entry in the table should follow the format:
	I1001 19:52:49.840043   48985 command_runner.go:130] > #
	I1001 19:52:49.840048   48985 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1001 19:52:49.840055   48985 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1001 19:52:49.840073   48985 command_runner.go:130] > # runtime_type = "oci"
	I1001 19:52:49.840079   48985 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1001 19:52:49.840084   48985 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1001 19:52:49.840089   48985 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1001 19:52:49.840093   48985 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1001 19:52:49.840099   48985 command_runner.go:130] > # monitor_env = []
	I1001 19:52:49.840104   48985 command_runner.go:130] > # privileged_without_host_devices = false
	I1001 19:52:49.840110   48985 command_runner.go:130] > # allowed_annotations = []
	I1001 19:52:49.840115   48985 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1001 19:52:49.840121   48985 command_runner.go:130] > # Where:
	I1001 19:52:49.840126   48985 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1001 19:52:49.840134   48985 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1001 19:52:49.840143   48985 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1001 19:52:49.840155   48985 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1001 19:52:49.840167   48985 command_runner.go:130] > #   in $PATH.
	I1001 19:52:49.840179   48985 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1001 19:52:49.840190   48985 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1001 19:52:49.840202   48985 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1001 19:52:49.840211   48985 command_runner.go:130] > #   state.
	I1001 19:52:49.840223   48985 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1001 19:52:49.840235   48985 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1001 19:52:49.840244   48985 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1001 19:52:49.840249   48985 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1001 19:52:49.840255   48985 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1001 19:52:49.840264   48985 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1001 19:52:49.840278   48985 command_runner.go:130] > #   The currently recognized values are:
	I1001 19:52:49.840286   48985 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1001 19:52:49.840294   48985 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1001 19:52:49.840301   48985 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1001 19:52:49.840309   48985 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1001 19:52:49.840318   48985 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1001 19:52:49.840325   48985 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1001 19:52:49.840334   48985 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1001 19:52:49.840342   48985 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1001 19:52:49.840349   48985 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1001 19:52:49.840367   48985 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1001 19:52:49.840377   48985 command_runner.go:130] > #   deprecated option "conmon".
	I1001 19:52:49.840388   48985 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1001 19:52:49.840397   48985 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1001 19:52:49.840406   48985 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1001 19:52:49.840411   48985 command_runner.go:130] > #   should be moved to the container's cgroup
	I1001 19:52:49.840417   48985 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I1001 19:52:49.840425   48985 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1001 19:52:49.840431   48985 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1001 19:52:49.840439   48985 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1001 19:52:49.840443   48985 command_runner.go:130] > #
	I1001 19:52:49.840449   48985 command_runner.go:130] > # Using the seccomp notifier feature:
	I1001 19:52:49.840456   48985 command_runner.go:130] > #
	I1001 19:52:49.840464   48985 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1001 19:52:49.840472   48985 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1001 19:52:49.840478   48985 command_runner.go:130] > #
	I1001 19:52:49.840483   48985 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1001 19:52:49.840496   48985 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1001 19:52:49.840501   48985 command_runner.go:130] > #
	I1001 19:52:49.840506   48985 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1001 19:52:49.840512   48985 command_runner.go:130] > # feature.
	I1001 19:52:49.840516   48985 command_runner.go:130] > #
	I1001 19:52:49.840524   48985 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1001 19:52:49.840529   48985 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1001 19:52:49.840537   48985 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1001 19:52:49.840545   48985 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1001 19:52:49.840553   48985 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1001 19:52:49.840558   48985 command_runner.go:130] > #
	I1001 19:52:49.840563   48985 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1001 19:52:49.840571   48985 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1001 19:52:49.840576   48985 command_runner.go:130] > #
	I1001 19:52:49.840582   48985 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1001 19:52:49.840589   48985 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1001 19:52:49.840592   48985 command_runner.go:130] > #
	I1001 19:52:49.840598   48985 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1001 19:52:49.840606   48985 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1001 19:52:49.840609   48985 command_runner.go:130] > # limitation.
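
As a rough illustration of the annotation usage described in the comments above (not something produced by this test run), a Pod that opts into the seccomp notifier would carry the "io.kubernetes.cri-o.seccompNotifierAction" annotation and set restartPolicy to Never so the kubelet does not restart the terminated container. The Go sketch below builds such an object; the pod name and image are placeholders.

	// Sketch only: a Pod spec carrying the seccomp notifier annotation with
	// restartPolicy Never, as the config comments above require.
	package main

	import (
		"encoding/json"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		pod := corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "seccomp-notifier-demo", // placeholder name
				Annotations: map[string]string{
					// Ask CRI-O to stop the workload when a blocked syscall is seen.
					"io.kubernetes.cri-o.seccompNotifierAction": "stop",
				},
			},
			Spec: corev1.PodSpec{
				RestartPolicy: corev1.RestartPolicyNever,
				Containers: []corev1.Container{
					{Name: "app", Image: "busybox"}, // placeholder container
				},
			},
		}
		out, _ := json.MarshalIndent(pod, "", "  ")
		fmt.Println(string(out))
	}
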
	I1001 19:52:49.840616   48985 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1001 19:52:49.840623   48985 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1001 19:52:49.840626   48985 command_runner.go:130] > runtime_type = "oci"
	I1001 19:52:49.840632   48985 command_runner.go:130] > runtime_root = "/run/runc"
	I1001 19:52:49.840636   48985 command_runner.go:130] > runtime_config_path = ""
	I1001 19:52:49.840643   48985 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1001 19:52:49.840647   48985 command_runner.go:130] > monitor_cgroup = "pod"
	I1001 19:52:49.840653   48985 command_runner.go:130] > monitor_exec_cgroup = ""
	I1001 19:52:49.840657   48985 command_runner.go:130] > monitor_env = [
	I1001 19:52:49.840664   48985 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1001 19:52:49.840668   48985 command_runner.go:130] > ]
	I1001 19:52:49.840673   48985 command_runner.go:130] > privileged_without_host_devices = false
	I1001 19:52:49.840684   48985 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1001 19:52:49.840691   48985 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1001 19:52:49.840697   48985 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1001 19:52:49.840706   48985 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix, and a set of resources it supports mutating.
	I1001 19:52:49.840718   48985 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1001 19:52:49.840726   48985 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1001 19:52:49.840736   48985 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1001 19:52:49.840747   48985 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1001 19:52:49.840755   48985 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1001 19:52:49.840765   48985 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1001 19:52:49.840771   48985 command_runner.go:130] > # Example:
	I1001 19:52:49.840776   48985 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1001 19:52:49.840783   48985 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1001 19:52:49.840787   48985 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1001 19:52:49.840794   48985 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1001 19:52:49.840797   48985 command_runner.go:130] > # cpuset = 0
	I1001 19:52:49.840803   48985 command_runner.go:130] > # cpushares = "0-1"
	I1001 19:52:49.840806   48985 command_runner.go:130] > # Where:
	I1001 19:52:49.840813   48985 command_runner.go:130] > # The workload name is workload-type.
	I1001 19:52:49.840820   48985 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1001 19:52:49.840826   48985 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1001 19:52:49.840832   48985 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1001 19:52:49.840840   48985 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1001 19:52:49.840847   48985 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1001 19:52:49.840854   48985 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1001 19:52:49.840861   48985 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1001 19:52:49.840867   48985 command_runner.go:130] > # Default value is set to true
	I1001 19:52:49.840872   48985 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1001 19:52:49.840879   48985 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1001 19:52:49.840886   48985 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1001 19:52:49.840890   48985 command_runner.go:130] > # Default value is set to 'false'
	I1001 19:52:49.840896   48985 command_runner.go:130] > # disable_hostport_mapping = false
	I1001 19:52:49.840905   48985 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1001 19:52:49.840909   48985 command_runner.go:130] > #
	I1001 19:52:49.840914   48985 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1001 19:52:49.840920   48985 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1001 19:52:49.840925   48985 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1001 19:52:49.840930   48985 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1001 19:52:49.840937   48985 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1001 19:52:49.840940   48985 command_runner.go:130] > [crio.image]
	I1001 19:52:49.840945   48985 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1001 19:52:49.840949   48985 command_runner.go:130] > # default_transport = "docker://"
	I1001 19:52:49.840956   48985 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1001 19:52:49.840961   48985 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1001 19:52:49.840965   48985 command_runner.go:130] > # global_auth_file = ""
	I1001 19:52:49.840970   48985 command_runner.go:130] > # The image used to instantiate infra containers.
	I1001 19:52:49.840974   48985 command_runner.go:130] > # This option supports live configuration reload.
	I1001 19:52:49.840978   48985 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1001 19:52:49.840984   48985 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1001 19:52:49.840989   48985 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1001 19:52:49.840994   48985 command_runner.go:130] > # This option supports live configuration reload.
	I1001 19:52:49.840998   48985 command_runner.go:130] > # pause_image_auth_file = ""
	I1001 19:52:49.841003   48985 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1001 19:52:49.841009   48985 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1001 19:52:49.841014   48985 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1001 19:52:49.841019   48985 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1001 19:52:49.841022   48985 command_runner.go:130] > # pause_command = "/pause"
	I1001 19:52:49.841028   48985 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1001 19:52:49.841033   48985 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1001 19:52:49.841038   48985 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1001 19:52:49.841045   48985 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1001 19:52:49.841050   48985 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1001 19:52:49.841056   48985 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1001 19:52:49.841059   48985 command_runner.go:130] > # pinned_images = [
	I1001 19:52:49.841063   48985 command_runner.go:130] > # ]
	I1001 19:52:49.841070   48985 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1001 19:52:49.841076   48985 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1001 19:52:49.841085   48985 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1001 19:52:49.841093   48985 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1001 19:52:49.841097   48985 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1001 19:52:49.841104   48985 command_runner.go:130] > # signature_policy = ""
	I1001 19:52:49.841109   48985 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1001 19:52:49.841118   48985 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1001 19:52:49.841127   48985 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1001 19:52:49.841139   48985 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1001 19:52:49.841152   48985 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1001 19:52:49.841162   48985 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1001 19:52:49.841174   48985 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1001 19:52:49.841186   48985 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1001 19:52:49.841195   48985 command_runner.go:130] > # changing them here.
	I1001 19:52:49.841201   48985 command_runner.go:130] > # insecure_registries = [
	I1001 19:52:49.841209   48985 command_runner.go:130] > # ]
	I1001 19:52:49.841217   48985 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1001 19:52:49.841227   48985 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1001 19:52:49.841235   48985 command_runner.go:130] > # image_volumes = "mkdir"
	I1001 19:52:49.841247   48985 command_runner.go:130] > # Temporary directory to use for storing big files
	I1001 19:52:49.841254   48985 command_runner.go:130] > # big_files_temporary_dir = ""
	I1001 19:52:49.841260   48985 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1001 19:52:49.841266   48985 command_runner.go:130] > # CNI plugins.
	I1001 19:52:49.841270   48985 command_runner.go:130] > [crio.network]
	I1001 19:52:49.841278   48985 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1001 19:52:49.841285   48985 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1001 19:52:49.841289   48985 command_runner.go:130] > # cni_default_network = ""
	I1001 19:52:49.841297   48985 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1001 19:52:49.841303   48985 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1001 19:52:49.841308   48985 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1001 19:52:49.841314   48985 command_runner.go:130] > # plugin_dirs = [
	I1001 19:52:49.841318   48985 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1001 19:52:49.841324   48985 command_runner.go:130] > # ]
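
The "first one found in network_dir" behaviour noted above can be approximated with a few lines of Go; the directory path is the default shown in the config, and the lexical-ordering assumption is mine, not a statement about CRI-O's exact loading logic.

	// Sketch: list the CNI config directory and report the first .conf/.conflist entry.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"sort"
		"strings"
	)

	func main() {
		networkDir := "/etc/cni/net.d/" // default from the config above
		entries, err := os.ReadDir(networkDir)
		if err != nil {
			fmt.Println("cannot read network_dir:", err)
			return
		}
		var names []string
		for _, e := range entries {
			if strings.HasSuffix(e.Name(), ".conf") || strings.HasSuffix(e.Name(), ".conflist") {
				names = append(names, e.Name())
			}
		}
		sort.Strings(names)
		if len(names) == 0 {
			fmt.Println("no CNI configuration found")
			return
		}
		fmt.Println("default CNI network config:", filepath.Join(networkDir, names[0]))
	}
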
	I1001 19:52:49.841331   48985 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1001 19:52:49.841336   48985 command_runner.go:130] > [crio.metrics]
	I1001 19:52:49.841341   48985 command_runner.go:130] > # Globally enable or disable metrics support.
	I1001 19:52:49.841347   48985 command_runner.go:130] > enable_metrics = true
	I1001 19:52:49.841352   48985 command_runner.go:130] > # Specify enabled metrics collectors.
	I1001 19:52:49.841358   48985 command_runner.go:130] > # Per default all metrics are enabled.
	I1001 19:52:49.841364   48985 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1001 19:52:49.841372   48985 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1001 19:52:49.841380   48985 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1001 19:52:49.841387   48985 command_runner.go:130] > # metrics_collectors = [
	I1001 19:52:49.841391   48985 command_runner.go:130] > # 	"operations",
	I1001 19:52:49.841397   48985 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1001 19:52:49.841402   48985 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1001 19:52:49.841408   48985 command_runner.go:130] > # 	"operations_errors",
	I1001 19:52:49.841412   48985 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1001 19:52:49.841418   48985 command_runner.go:130] > # 	"image_pulls_by_name",
	I1001 19:52:49.841423   48985 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1001 19:52:49.841432   48985 command_runner.go:130] > # 	"image_pulls_failures",
	I1001 19:52:49.841439   48985 command_runner.go:130] > # 	"image_pulls_successes",
	I1001 19:52:49.841443   48985 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1001 19:52:49.841447   48985 command_runner.go:130] > # 	"image_layer_reuse",
	I1001 19:52:49.841454   48985 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1001 19:52:49.841458   48985 command_runner.go:130] > # 	"containers_oom_total",
	I1001 19:52:49.841462   48985 command_runner.go:130] > # 	"containers_oom",
	I1001 19:52:49.841467   48985 command_runner.go:130] > # 	"processes_defunct",
	I1001 19:52:49.841471   48985 command_runner.go:130] > # 	"operations_total",
	I1001 19:52:49.841477   48985 command_runner.go:130] > # 	"operations_latency_seconds",
	I1001 19:52:49.841481   48985 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1001 19:52:49.841487   48985 command_runner.go:130] > # 	"operations_errors_total",
	I1001 19:52:49.841491   48985 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1001 19:52:49.841497   48985 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1001 19:52:49.841501   48985 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1001 19:52:49.841507   48985 command_runner.go:130] > # 	"image_pulls_success_total",
	I1001 19:52:49.841512   48985 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1001 19:52:49.841518   48985 command_runner.go:130] > # 	"containers_oom_count_total",
	I1001 19:52:49.841522   48985 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1001 19:52:49.841528   48985 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1001 19:52:49.841531   48985 command_runner.go:130] > # ]
	I1001 19:52:49.841536   48985 command_runner.go:130] > # The port on which the metrics server will listen.
	I1001 19:52:49.841542   48985 command_runner.go:130] > # metrics_port = 9090
	I1001 19:52:49.841547   48985 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1001 19:52:49.841553   48985 command_runner.go:130] > # metrics_socket = ""
	I1001 19:52:49.841558   48985 command_runner.go:130] > # The certificate for the secure metrics server.
	I1001 19:52:49.841566   48985 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1001 19:52:49.841572   48985 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1001 19:52:49.841579   48985 command_runner.go:130] > # certificate on any modification event.
	I1001 19:52:49.841583   48985 command_runner.go:130] > # metrics_cert = ""
	I1001 19:52:49.841591   48985 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1001 19:52:49.841597   48985 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1001 19:52:49.841601   48985 command_runner.go:130] > # metrics_key = ""
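
Because enable_metrics is true and the commented-out default port is 9090, CRI-O's Prometheus metrics should be reachable over plain HTTP on the node. The sketch below assumes the default port and the conventional /metrics path; neither is verified by this test run.

	// Sketch: fetch CRI-O's metrics endpoint and print the operations collectors
	// mentioned in the config above.
	package main

	import (
		"bufio"
		"fmt"
		"net/http"
		"strings"
	)

	func main() {
		resp, err := http.Get("http://127.0.0.1:9090/metrics") // assumed default metrics_port
		if err != nil {
			fmt.Println("metrics endpoint not reachable:", err)
			return
		}
		defer resp.Body.Close()

		scanner := bufio.NewScanner(resp.Body)
		for scanner.Scan() {
			line := scanner.Text()
			if strings.HasPrefix(line, "crio_operations") ||
				strings.HasPrefix(line, "container_runtime_crio_operations") {
				fmt.Println(line)
			}
		}
	}
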
	I1001 19:52:49.841608   48985 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1001 19:52:49.841612   48985 command_runner.go:130] > [crio.tracing]
	I1001 19:52:49.841617   48985 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1001 19:52:49.841623   48985 command_runner.go:130] > # enable_tracing = false
	I1001 19:52:49.841628   48985 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1001 19:52:49.841637   48985 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1001 19:52:49.841644   48985 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1001 19:52:49.841650   48985 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1001 19:52:49.841654   48985 command_runner.go:130] > # CRI-O NRI configuration.
	I1001 19:52:49.841657   48985 command_runner.go:130] > [crio.nri]
	I1001 19:52:49.841664   48985 command_runner.go:130] > # Globally enable or disable NRI.
	I1001 19:52:49.841668   48985 command_runner.go:130] > # enable_nri = false
	I1001 19:52:49.841676   48985 command_runner.go:130] > # NRI socket to listen on.
	I1001 19:52:49.841685   48985 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1001 19:52:49.841691   48985 command_runner.go:130] > # NRI plugin directory to use.
	I1001 19:52:49.841696   48985 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1001 19:52:49.841701   48985 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1001 19:52:49.841712   48985 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1001 19:52:49.841717   48985 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1001 19:52:49.841723   48985 command_runner.go:130] > # nri_disable_connections = false
	I1001 19:52:49.841728   48985 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1001 19:52:49.841742   48985 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1001 19:52:49.841747   48985 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1001 19:52:49.841754   48985 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1001 19:52:49.841759   48985 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1001 19:52:49.841765   48985 command_runner.go:130] > [crio.stats]
	I1001 19:52:49.841771   48985 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1001 19:52:49.841778   48985 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1001 19:52:49.841782   48985 command_runner.go:130] > # stats_collection_period = 0
	I1001 19:52:49.841892   48985 cni.go:84] Creating CNI manager for ""
	I1001 19:52:49.841906   48985 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1001 19:52:49.841914   48985 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 19:52:49.841941   48985 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.165 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-325713 NodeName:multinode-325713 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 19:52:49.842103   48985 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.165
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-325713"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.165
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 19:52:49.842168   48985 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 19:52:49.851776   48985 command_runner.go:130] > kubeadm
	I1001 19:52:49.851794   48985 command_runner.go:130] > kubectl
	I1001 19:52:49.851800   48985 command_runner.go:130] > kubelet
	I1001 19:52:49.851826   48985 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 19:52:49.851883   48985 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 19:52:49.860589   48985 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1001 19:52:49.876591   48985 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 19:52:49.891917   48985 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
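
One quick way to sanity-check the generated multi-document kubeadm config written above is to split it on the YAML document separators and list each apiVersion/kind (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). The sketch below is not part of minikube; it only assumes the file path shown in the scp step above.

	// Sketch: report the apiVersion/kind of each document in the generated kubeadm.yaml.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path from the scp step above
		if err != nil {
			fmt.Println("read config:", err)
			return
		}
		for i, doc := range strings.Split(string(data), "\n---\n") {
			var apiVersion, kind string
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(line, "apiVersion:") {
					apiVersion = strings.TrimSpace(strings.TrimPrefix(line, "apiVersion:"))
				}
				if strings.HasPrefix(line, "kind:") {
					kind = strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
				}
			}
			fmt.Printf("document %d: %s / %s\n", i+1, apiVersion, kind)
		}
	}
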
	I1001 19:52:49.907137   48985 ssh_runner.go:195] Run: grep 192.168.39.165	control-plane.minikube.internal$ /etc/hosts
	I1001 19:52:49.910877   48985 command_runner.go:130] > 192.168.39.165	control-plane.minikube.internal
	I1001 19:52:49.911001   48985 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:52:50.042930   48985 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 19:52:50.056659   48985 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/multinode-325713 for IP: 192.168.39.165
	I1001 19:52:50.056696   48985 certs.go:194] generating shared ca certs ...
	I1001 19:52:50.056713   48985 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:52:50.056880   48985 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 19:52:50.056924   48985 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 19:52:50.056938   48985 certs.go:256] generating profile certs ...
	I1001 19:52:50.057020   48985 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/multinode-325713/client.key
	I1001 19:52:50.057090   48985 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/multinode-325713/apiserver.key.93594a76
	I1001 19:52:50.057131   48985 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/multinode-325713/proxy-client.key
	I1001 19:52:50.057142   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 19:52:50.057159   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 19:52:50.057174   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 19:52:50.057187   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 19:52:50.057200   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/multinode-325713/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 19:52:50.057214   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/multinode-325713/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 19:52:50.057230   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/multinode-325713/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 19:52:50.057244   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/multinode-325713/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 19:52:50.057297   48985 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem (1338 bytes)
	W1001 19:52:50.057331   48985 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430_empty.pem, impossibly tiny 0 bytes
	I1001 19:52:50.057346   48985 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 19:52:50.057375   48985 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 19:52:50.057410   48985 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 19:52:50.057437   48985 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 19:52:50.057481   48985 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:52:50.057513   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /usr/share/ca-certificates/184302.pem
	I1001 19:52:50.057530   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:52:50.057546   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem -> /usr/share/ca-certificates/18430.pem
	I1001 19:52:50.058101   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 19:52:50.082711   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 19:52:50.106754   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 19:52:50.129583   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 19:52:50.154005   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/multinode-325713/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1001 19:52:50.178885   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/multinode-325713/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 19:52:50.202452   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/multinode-325713/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 19:52:50.226496   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/multinode-325713/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1001 19:52:50.250879   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /usr/share/ca-certificates/184302.pem (1708 bytes)
	I1001 19:52:50.274976   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 19:52:50.299353   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem --> /usr/share/ca-certificates/18430.pem (1338 bytes)
	I1001 19:52:50.323050   48985 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 19:52:50.339273   48985 ssh_runner.go:195] Run: openssl version
	I1001 19:52:50.345153   48985 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1001 19:52:50.345226   48985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/184302.pem && ln -fs /usr/share/ca-certificates/184302.pem /etc/ssl/certs/184302.pem"
	I1001 19:52:50.355918   48985 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/184302.pem
	I1001 19:52:50.360545   48985 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 19:52:50.360619   48985 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 19:52:50.360680   48985 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/184302.pem
	I1001 19:52:50.366369   48985 command_runner.go:130] > 3ec20f2e
	I1001 19:52:50.366454   48985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/184302.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 19:52:50.375538   48985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 19:52:50.385776   48985 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:52:50.390018   48985 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:52:50.390042   48985 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:52:50.390076   48985 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:52:50.395646   48985 command_runner.go:130] > b5213941
	I1001 19:52:50.395755   48985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 19:52:50.405126   48985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18430.pem && ln -fs /usr/share/ca-certificates/18430.pem /etc/ssl/certs/18430.pem"
	I1001 19:52:50.415435   48985 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18430.pem
	I1001 19:52:50.419669   48985 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 19:52:50.419690   48985 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 19:52:50.419727   48985 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18430.pem
	I1001 19:52:50.424835   48985 command_runner.go:130] > 51391683
	I1001 19:52:50.425013   48985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18430.pem /etc/ssl/certs/51391683.0"
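
The pattern repeated above for each CA file (hash the certificate with openssl, then make sure a "<hash>.0" symlink exists under /etc/ssl/certs) boils down to roughly the following Go sketch. The paths are illustrative and the helper is not minikube's code; it shells out to the same openssl invocation the log shows.

	// Sketch: compute the OpenSSL subject hash of a PEM file and ensure the
	// corresponding <hash>.0 link exists in /etc/ssl/certs.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func ensureHashLink(pemPath string) error {
		// Same command the test runs: openssl x509 -hash -noout -in <pem>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		if _, err := os.Lstat(link); err == nil {
			return nil // link already present
		}
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := ensureHashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}
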
	I1001 19:52:50.433789   48985 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 19:52:50.437744   48985 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 19:52:50.437773   48985 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1001 19:52:50.437780   48985 command_runner.go:130] > Device: 253,1	Inode: 9431080     Links: 1
	I1001 19:52:50.437786   48985 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1001 19:52:50.437796   48985 command_runner.go:130] > Access: 2024-10-01 19:45:59.439804893 +0000
	I1001 19:52:50.437804   48985 command_runner.go:130] > Modify: 2024-10-01 19:45:59.439804893 +0000
	I1001 19:52:50.437811   48985 command_runner.go:130] > Change: 2024-10-01 19:45:59.439804893 +0000
	I1001 19:52:50.437819   48985 command_runner.go:130] >  Birth: 2024-10-01 19:45:59.439804893 +0000
	I1001 19:52:50.437890   48985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1001 19:52:50.443078   48985 command_runner.go:130] > Certificate will not expire
	I1001 19:52:50.443147   48985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1001 19:52:50.448249   48985 command_runner.go:130] > Certificate will not expire
	I1001 19:52:50.448322   48985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1001 19:52:50.453288   48985 command_runner.go:130] > Certificate will not expire
	I1001 19:52:50.453485   48985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1001 19:52:50.458554   48985 command_runner.go:130] > Certificate will not expire
	I1001 19:52:50.458609   48985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1001 19:52:50.463627   48985 command_runner.go:130] > Certificate will not expire
	I1001 19:52:50.463726   48985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1001 19:52:50.469018   48985 command_runner.go:130] > Certificate will not expire
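
The "openssl x509 -noout -in <crt> -checkend 86400" probes above can also be expressed natively with crypto/x509. The sketch below is an equivalent 24-hour expiry check, not the implementation minikube actually uses.

	// Sketch: report whether the first certificate in a PEM file expires within d.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			return
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}
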
	I1001 19:52:50.469102   48985 kubeadm.go:392] StartCluster: {Name:multinode-325713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:multinode-325713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.61 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:52:50.469258   48985 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 19:52:50.469326   48985 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 19:52:50.504493   48985 command_runner.go:130] > 50abdc221179748e0e77361468d2966268a00dbb73094bf062c3fc4dd4abab85
	I1001 19:52:50.504525   48985 command_runner.go:130] > e73d14dc9d500ff21c9c06b2a020bc93776770a604b0f75cc50773c705cd6ff0
	I1001 19:52:50.504535   48985 command_runner.go:130] > 74cc8c8d45eb8e370db6e5a01ea691e69597f620ba5e51937ab8a6c8a4efb2ab
	I1001 19:52:50.504545   48985 command_runner.go:130] > c753d689839b591bdbd1e4b857d4fc8207b7f30ad75e876545e5c20f6aea9c17
	I1001 19:52:50.504554   48985 command_runner.go:130] > 99e0c7308d4811ed9a37c1503f0c4f652731732c727fddb6b2ca91a0d1d1207a
	I1001 19:52:50.504562   48985 command_runner.go:130] > b5825a9ff6472b047a265e5afd5811414044723e46816206e4beb6b418b69e24
	I1001 19:52:50.504567   48985 command_runner.go:130] > a87badf95fa60dbc3077c37ef50ce351665296ba8906532d97ef8dc1f7e01639
	I1001 19:52:50.504574   48985 command_runner.go:130] > 19d51ef666dc5924ed51fc7a56fd063604dfa6c7c7ed8f6f1edf773e2ddf803a
	I1001 19:52:50.504594   48985 cri.go:89] found id: "50abdc221179748e0e77361468d2966268a00dbb73094bf062c3fc4dd4abab85"
	I1001 19:52:50.504602   48985 cri.go:89] found id: "e73d14dc9d500ff21c9c06b2a020bc93776770a604b0f75cc50773c705cd6ff0"
	I1001 19:52:50.504605   48985 cri.go:89] found id: "74cc8c8d45eb8e370db6e5a01ea691e69597f620ba5e51937ab8a6c8a4efb2ab"
	I1001 19:52:50.504610   48985 cri.go:89] found id: "c753d689839b591bdbd1e4b857d4fc8207b7f30ad75e876545e5c20f6aea9c17"
	I1001 19:52:50.504613   48985 cri.go:89] found id: "99e0c7308d4811ed9a37c1503f0c4f652731732c727fddb6b2ca91a0d1d1207a"
	I1001 19:52:50.504618   48985 cri.go:89] found id: "b5825a9ff6472b047a265e5afd5811414044723e46816206e4beb6b418b69e24"
	I1001 19:52:50.504621   48985 cri.go:89] found id: "a87badf95fa60dbc3077c37ef50ce351665296ba8906532d97ef8dc1f7e01639"
	I1001 19:52:50.504624   48985 cri.go:89] found id: "19d51ef666dc5924ed51fc7a56fd063604dfa6c7c7ed8f6f1edf773e2ddf803a"
	I1001 19:52:50.504626   48985 cri.go:89] found id: ""
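
The container listing above (crictl ps with a namespace label filter, followed by the "found id" lines) amounts to roughly the following. The sketch shells out to crictl the same way; the sudo invocation and error handling are simplified assumptions rather than minikube's exact wrapper.

	// Sketch: list kube-system container IDs via crictl, mirroring the cri.go output above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		fmt.Printf("found %d kube-system containers\n", len(ids))
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}
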
	I1001 19:52:50.504667   48985 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Oct 01 19:54:36 multinode-325713 crio[2688]: time="2024-10-01 19:54:36.032010464Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812476031989022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5fdcf07c-7897-468d-88cd-20b6e52bb1cd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:54:36 multinode-325713 crio[2688]: time="2024-10-01 19:54:36.032624060Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3c5d301a-95f4-488e-b0c8-9fccb2cf72fd name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:54:36 multinode-325713 crio[2688]: time="2024-10-01 19:54:36.032689802Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3c5d301a-95f4-488e-b0c8-9fccb2cf72fd name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:54:36 multinode-325713 crio[2688]: time="2024-10-01 19:54:36.033059395Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b834b2eb85399ce2fb868e88427aa76dca10dbdd8cbbaa50408427c4924cfc2,PodSandboxId:8d512f727350db7e42fb355890131b9202b3b5ac2f7cf97bb0ac0897743a2887,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727812411494846624,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nhjc5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10d9056-2747-4c16-be7f-478f969d5d3a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70eda3c33aa85b6d005a8045ca58d14983670f13a2f0a3403770d9d77d1eaf5,PodSandboxId:f6df19e7ef815784870ba6cfaa2a215f639a34f4ba4aa828afe952fa36f201ae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727812377889364661,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7kvjb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3af66262-e3a8-4687-bf10-f7e139689769,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8622827e0bea2327d91748e496fa5a35b91539cac3bc3d17318689ecbf817385,PodSandboxId:7a04724b1fa98a29c2c22ae184bff58fbe3f0d94fd27dc3b3789b7be5c370477,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727812377961693569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-swx5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fa4c293-22e5-4a77-b851-3ae28e745a58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e133d880f95931cecea791c307dbd3b63126f009258e015eab481ab1ffde4c7a,PodSandboxId:81b1c36d12ae4bd3bf4d4982f3599008911582918553c0a746f84be09c849dc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727812377886814724,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqznz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4551ba7-af95-4699-a104-1e0b0c3c965b,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f406b30d4c63e456cd07ab7d97da4bf2e332b36fcc54315320d56f51c5399c,PodSandboxId:b10670eed02ee460bbe023ab5779a9f0a7aed0572e68bca0fec3438878b0a36e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727812377839104762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279c7942-da40-453f-b94f-2795b1c84c5d,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8cca9641eaeaec6d2442941fde94017ec30b1be1fa78944aa88014145d48b14,PodSandboxId:d177cd495f846e32744ad856b6fc7972ac9d9a2642ad5545b96f957ef7b1f3ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727812372965848782,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b4c161593a5efe7a8c831c698c0fa1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91728fb10efec288907940bbd25068fbc59d3bf362903c3ee875cb425a7e9570,PodSandboxId:b9ca6a212d89c8fee5b39731beaa923e3c583d0537768e387c542d6f17a7845c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727812372994973249,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 054fcb6311ec0c9eed872f8380f10fef,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb81c99fa1c2e42286c70d520a49c2ad2ec6c2c8c728399216de29090137bf71,PodSandboxId:7590ee8b5f847ce6d77d4d8d1ae22ae7e1de9601c8c50fdce24c675a9303bffe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727812372902988561,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14066c00a6e2b917f1e44bc7ec721b3b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:454294743e585aefdb165075a88ffc9546bf7dac940543d3064fa8252cfe5b64,PodSandboxId:e1b7e71701a6300d0368715b3ead5c3d45d5044a83e61dcf39e592d182fc1042,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727812372884212491,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bf22a4e81d08ba90ba65b23ac7659,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36dd8c1d835614188d453bde713aaf3ead777013b290b859b9ef1cf875c1b685,PodSandboxId:0108e2f859c4b9e450abbb0dc80b3ea050d18785ce021d693ab87b230b013c18,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727812043159380149,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nhjc5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10d9056-2747-4c16-be7f-478f969d5d3a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50abdc221179748e0e77361468d2966268a00dbb73094bf062c3fc4dd4abab85,PodSandboxId:9013eb36b71b5e1fe146ed5c7cacfd3d5fa4aac2a0073e7c062d23327122e28e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727811987017516787,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-swx5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fa4c293-22e5-4a77-b851-3ae28e745a58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73d14dc9d500ff21c9c06b2a020bc93776770a604b0f75cc50773c705cd6ff0,PodSandboxId:b9b43bf6e515ac84d92762f64983b1829820bf2bd6a095077cc936f208c9d88f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727811986960924497,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 279c7942-da40-453f-b94f-2795b1c84c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74cc8c8d45eb8e370db6e5a01ea691e69597f620ba5e51937ab8a6c8a4efb2ab,PodSandboxId:7f69035cb9fd7cd59575e995ecaf53d33d5b0cc28348f2994cc6d8258bbe1a39,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727811975043730765,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7kvjb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 3af66262-e3a8-4687-bf10-f7e139689769,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c753d689839b591bdbd1e4b857d4fc8207b7f30ad75e876545e5c20f6aea9c17,PodSandboxId:12f30e785442ae580bcbeff933862b69e54008da91a2c67f36e2a9d0c48d8e72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727811974820289353,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqznz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4551ba7-af95-4699-a104
-1e0b0c3c965b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99e0c7308d4811ed9a37c1503f0c4f652731732c727fddb6b2ca91a0d1d1207a,PodSandboxId:4f3f4a90ff8fba31bc0128beb7941fee07256d96ae2bb46791196434f6cc2a35,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727811963393478784,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 054fcb6
311ec0c9eed872f8380f10fef,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a87badf95fa60dbc3077c37ef50ce351665296ba8906532d97ef8dc1f7e01639,PodSandboxId:c8353829f738e155c8a3fd6c5b006eca9c86c3471c91bda191a91b08ce182339,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727811963386369788,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b4c161593a5efe7a8c831c698c0fa1,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5825a9ff6472b047a265e5afd5811414044723e46816206e4beb6b418b69e24,PodSandboxId:5790e9d0912b0802ba052d340247180ecd19df92be2648f40ae71124e5e27d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727811963393006733,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14066c00a6e2b917f1e44bc7ec721b3b,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19d51ef666dc5924ed51fc7a56fd063604dfa6c7c7ed8f6f1edf773e2ddf803a,PodSandboxId:d1410a265d7f54ed665f318621cb9f3ed483ad896ddb5139c6fc994458d41b4a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727811963264479921,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bf22a4e81d08ba90ba65b23ac7659,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3c5d301a-95f4-488e-b0c8-9fccb2cf72fd name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:54:36 multinode-325713 crio[2688]: time="2024-10-01 19:54:36.072784231Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b34d0bb6-278d-4f72-8789-e9086bdbde7a name=/runtime.v1.RuntimeService/Version
	Oct 01 19:54:36 multinode-325713 crio[2688]: time="2024-10-01 19:54:36.072876769Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b34d0bb6-278d-4f72-8789-e9086bdbde7a name=/runtime.v1.RuntimeService/Version
	Oct 01 19:54:36 multinode-325713 crio[2688]: time="2024-10-01 19:54:36.073946486Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1f3320b7-14f0-480c-b889-aa41f2aed73d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:54:36 multinode-325713 crio[2688]: time="2024-10-01 19:54:36.074345624Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812476074322106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1f3320b7-14f0-480c-b889-aa41f2aed73d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:54:36 multinode-325713 crio[2688]: time="2024-10-01 19:54:36.074854528Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dff47fda-7f25-4b01-a8ea-d18549f61a06 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:54:36 multinode-325713 crio[2688]: time="2024-10-01 19:54:36.074924822Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dff47fda-7f25-4b01-a8ea-d18549f61a06 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:54:36 multinode-325713 crio[2688]: time="2024-10-01 19:54:36.075253258Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b834b2eb85399ce2fb868e88427aa76dca10dbdd8cbbaa50408427c4924cfc2,PodSandboxId:8d512f727350db7e42fb355890131b9202b3b5ac2f7cf97bb0ac0897743a2887,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727812411494846624,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nhjc5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10d9056-2747-4c16-be7f-478f969d5d3a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70eda3c33aa85b6d005a8045ca58d14983670f13a2f0a3403770d9d77d1eaf5,PodSandboxId:f6df19e7ef815784870ba6cfaa2a215f639a34f4ba4aa828afe952fa36f201ae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727812377889364661,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7kvjb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3af66262-e3a8-4687-bf10-f7e139689769,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8622827e0bea2327d91748e496fa5a35b91539cac3bc3d17318689ecbf817385,PodSandboxId:7a04724b1fa98a29c2c22ae184bff58fbe3f0d94fd27dc3b3789b7be5c370477,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727812377961693569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-swx5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fa4c293-22e5-4a77-b851-3ae28e745a58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e133d880f95931cecea791c307dbd3b63126f009258e015eab481ab1ffde4c7a,PodSandboxId:81b1c36d12ae4bd3bf4d4982f3599008911582918553c0a746f84be09c849dc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727812377886814724,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqznz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4551ba7-af95-4699-a104-1e0b0c3c965b,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f406b30d4c63e456cd07ab7d97da4bf2e332b36fcc54315320d56f51c5399c,PodSandboxId:b10670eed02ee460bbe023ab5779a9f0a7aed0572e68bca0fec3438878b0a36e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727812377839104762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279c7942-da40-453f-b94f-2795b1c84c5d,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8cca9641eaeaec6d2442941fde94017ec30b1be1fa78944aa88014145d48b14,PodSandboxId:d177cd495f846e32744ad856b6fc7972ac9d9a2642ad5545b96f957ef7b1f3ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727812372965848782,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b4c161593a5efe7a8c831c698c0fa1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91728fb10efec288907940bbd25068fbc59d3bf362903c3ee875cb425a7e9570,PodSandboxId:b9ca6a212d89c8fee5b39731beaa923e3c583d0537768e387c542d6f17a7845c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727812372994973249,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 054fcb6311ec0c9eed872f8380f10fef,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb81c99fa1c2e42286c70d520a49c2ad2ec6c2c8c728399216de29090137bf71,PodSandboxId:7590ee8b5f847ce6d77d4d8d1ae22ae7e1de9601c8c50fdce24c675a9303bffe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727812372902988561,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14066c00a6e2b917f1e44bc7ec721b3b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:454294743e585aefdb165075a88ffc9546bf7dac940543d3064fa8252cfe5b64,PodSandboxId:e1b7e71701a6300d0368715b3ead5c3d45d5044a83e61dcf39e592d182fc1042,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727812372884212491,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bf22a4e81d08ba90ba65b23ac7659,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36dd8c1d835614188d453bde713aaf3ead777013b290b859b9ef1cf875c1b685,PodSandboxId:0108e2f859c4b9e450abbb0dc80b3ea050d18785ce021d693ab87b230b013c18,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727812043159380149,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nhjc5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10d9056-2747-4c16-be7f-478f969d5d3a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50abdc221179748e0e77361468d2966268a00dbb73094bf062c3fc4dd4abab85,PodSandboxId:9013eb36b71b5e1fe146ed5c7cacfd3d5fa4aac2a0073e7c062d23327122e28e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727811987017516787,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-swx5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fa4c293-22e5-4a77-b851-3ae28e745a58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73d14dc9d500ff21c9c06b2a020bc93776770a604b0f75cc50773c705cd6ff0,PodSandboxId:b9b43bf6e515ac84d92762f64983b1829820bf2bd6a095077cc936f208c9d88f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727811986960924497,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 279c7942-da40-453f-b94f-2795b1c84c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74cc8c8d45eb8e370db6e5a01ea691e69597f620ba5e51937ab8a6c8a4efb2ab,PodSandboxId:7f69035cb9fd7cd59575e995ecaf53d33d5b0cc28348f2994cc6d8258bbe1a39,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727811975043730765,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7kvjb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 3af66262-e3a8-4687-bf10-f7e139689769,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c753d689839b591bdbd1e4b857d4fc8207b7f30ad75e876545e5c20f6aea9c17,PodSandboxId:12f30e785442ae580bcbeff933862b69e54008da91a2c67f36e2a9d0c48d8e72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727811974820289353,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqznz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4551ba7-af95-4699-a104
-1e0b0c3c965b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99e0c7308d4811ed9a37c1503f0c4f652731732c727fddb6b2ca91a0d1d1207a,PodSandboxId:4f3f4a90ff8fba31bc0128beb7941fee07256d96ae2bb46791196434f6cc2a35,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727811963393478784,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 054fcb6
311ec0c9eed872f8380f10fef,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a87badf95fa60dbc3077c37ef50ce351665296ba8906532d97ef8dc1f7e01639,PodSandboxId:c8353829f738e155c8a3fd6c5b006eca9c86c3471c91bda191a91b08ce182339,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727811963386369788,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b4c161593a5efe7a8c831c698c0fa1,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5825a9ff6472b047a265e5afd5811414044723e46816206e4beb6b418b69e24,PodSandboxId:5790e9d0912b0802ba052d340247180ecd19df92be2648f40ae71124e5e27d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727811963393006733,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14066c00a6e2b917f1e44bc7ec721b3b,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19d51ef666dc5924ed51fc7a56fd063604dfa6c7c7ed8f6f1edf773e2ddf803a,PodSandboxId:d1410a265d7f54ed665f318621cb9f3ed483ad896ddb5139c6fc994458d41b4a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727811963264479921,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bf22a4e81d08ba90ba65b23ac7659,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dff47fda-7f25-4b01-a8ea-d18549f61a06 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:54:36 multinode-325713 crio[2688]: time="2024-10-01 19:54:36.114822771Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0a92288e-0ade-4752-b9e5-b200d9908396 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:54:36 multinode-325713 crio[2688]: time="2024-10-01 19:54:36.114924750Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0a92288e-0ade-4752-b9e5-b200d9908396 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:54:36 multinode-325713 crio[2688]: time="2024-10-01 19:54:36.116080709Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d8ada5f7-d16f-49a0-85f1-ae6c82cb0cde name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:54:36 multinode-325713 crio[2688]: time="2024-10-01 19:54:36.116610471Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812476116490245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8ada5f7-d16f-49a0-85f1-ae6c82cb0cde name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:54:36 multinode-325713 crio[2688]: time="2024-10-01 19:54:36.117160185Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=84ec6b0d-3814-4934-8114-b965b2024149 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:54:36 multinode-325713 crio[2688]: time="2024-10-01 19:54:36.117228237Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=84ec6b0d-3814-4934-8114-b965b2024149 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:54:36 multinode-325713 crio[2688]: time="2024-10-01 19:54:36.117548351Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b834b2eb85399ce2fb868e88427aa76dca10dbdd8cbbaa50408427c4924cfc2,PodSandboxId:8d512f727350db7e42fb355890131b9202b3b5ac2f7cf97bb0ac0897743a2887,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727812411494846624,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nhjc5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10d9056-2747-4c16-be7f-478f969d5d3a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70eda3c33aa85b6d005a8045ca58d14983670f13a2f0a3403770d9d77d1eaf5,PodSandboxId:f6df19e7ef815784870ba6cfaa2a215f639a34f4ba4aa828afe952fa36f201ae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727812377889364661,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7kvjb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3af66262-e3a8-4687-bf10-f7e139689769,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8622827e0bea2327d91748e496fa5a35b91539cac3bc3d17318689ecbf817385,PodSandboxId:7a04724b1fa98a29c2c22ae184bff58fbe3f0d94fd27dc3b3789b7be5c370477,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727812377961693569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-swx5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fa4c293-22e5-4a77-b851-3ae28e745a58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e133d880f95931cecea791c307dbd3b63126f009258e015eab481ab1ffde4c7a,PodSandboxId:81b1c36d12ae4bd3bf4d4982f3599008911582918553c0a746f84be09c849dc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727812377886814724,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqznz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4551ba7-af95-4699-a104-1e0b0c3c965b,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f406b30d4c63e456cd07ab7d97da4bf2e332b36fcc54315320d56f51c5399c,PodSandboxId:b10670eed02ee460bbe023ab5779a9f0a7aed0572e68bca0fec3438878b0a36e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727812377839104762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279c7942-da40-453f-b94f-2795b1c84c5d,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8cca9641eaeaec6d2442941fde94017ec30b1be1fa78944aa88014145d48b14,PodSandboxId:d177cd495f846e32744ad856b6fc7972ac9d9a2642ad5545b96f957ef7b1f3ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727812372965848782,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b4c161593a5efe7a8c831c698c0fa1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91728fb10efec288907940bbd25068fbc59d3bf362903c3ee875cb425a7e9570,PodSandboxId:b9ca6a212d89c8fee5b39731beaa923e3c583d0537768e387c542d6f17a7845c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727812372994973249,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 054fcb6311ec0c9eed872f8380f10fef,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb81c99fa1c2e42286c70d520a49c2ad2ec6c2c8c728399216de29090137bf71,PodSandboxId:7590ee8b5f847ce6d77d4d8d1ae22ae7e1de9601c8c50fdce24c675a9303bffe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727812372902988561,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14066c00a6e2b917f1e44bc7ec721b3b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:454294743e585aefdb165075a88ffc9546bf7dac940543d3064fa8252cfe5b64,PodSandboxId:e1b7e71701a6300d0368715b3ead5c3d45d5044a83e61dcf39e592d182fc1042,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727812372884212491,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bf22a4e81d08ba90ba65b23ac7659,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36dd8c1d835614188d453bde713aaf3ead777013b290b859b9ef1cf875c1b685,PodSandboxId:0108e2f859c4b9e450abbb0dc80b3ea050d18785ce021d693ab87b230b013c18,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727812043159380149,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nhjc5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10d9056-2747-4c16-be7f-478f969d5d3a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50abdc221179748e0e77361468d2966268a00dbb73094bf062c3fc4dd4abab85,PodSandboxId:9013eb36b71b5e1fe146ed5c7cacfd3d5fa4aac2a0073e7c062d23327122e28e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727811987017516787,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-swx5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fa4c293-22e5-4a77-b851-3ae28e745a58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73d14dc9d500ff21c9c06b2a020bc93776770a604b0f75cc50773c705cd6ff0,PodSandboxId:b9b43bf6e515ac84d92762f64983b1829820bf2bd6a095077cc936f208c9d88f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727811986960924497,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 279c7942-da40-453f-b94f-2795b1c84c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74cc8c8d45eb8e370db6e5a01ea691e69597f620ba5e51937ab8a6c8a4efb2ab,PodSandboxId:7f69035cb9fd7cd59575e995ecaf53d33d5b0cc28348f2994cc6d8258bbe1a39,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727811975043730765,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7kvjb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 3af66262-e3a8-4687-bf10-f7e139689769,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c753d689839b591bdbd1e4b857d4fc8207b7f30ad75e876545e5c20f6aea9c17,PodSandboxId:12f30e785442ae580bcbeff933862b69e54008da91a2c67f36e2a9d0c48d8e72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727811974820289353,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqznz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4551ba7-af95-4699-a104
-1e0b0c3c965b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99e0c7308d4811ed9a37c1503f0c4f652731732c727fddb6b2ca91a0d1d1207a,PodSandboxId:4f3f4a90ff8fba31bc0128beb7941fee07256d96ae2bb46791196434f6cc2a35,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727811963393478784,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 054fcb6
311ec0c9eed872f8380f10fef,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a87badf95fa60dbc3077c37ef50ce351665296ba8906532d97ef8dc1f7e01639,PodSandboxId:c8353829f738e155c8a3fd6c5b006eca9c86c3471c91bda191a91b08ce182339,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727811963386369788,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b4c161593a5efe7a8c831c698c0fa1,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5825a9ff6472b047a265e5afd5811414044723e46816206e4beb6b418b69e24,PodSandboxId:5790e9d0912b0802ba052d340247180ecd19df92be2648f40ae71124e5e27d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727811963393006733,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14066c00a6e2b917f1e44bc7ec721b3b,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19d51ef666dc5924ed51fc7a56fd063604dfa6c7c7ed8f6f1edf773e2ddf803a,PodSandboxId:d1410a265d7f54ed665f318621cb9f3ed483ad896ddb5139c6fc994458d41b4a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727811963264479921,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bf22a4e81d08ba90ba65b23ac7659,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=84ec6b0d-3814-4934-8114-b965b2024149 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:54:36 multinode-325713 crio[2688]: time="2024-10-01 19:54:36.157532016Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff348d44-170b-4424-a4e8-ab03816bc38c name=/runtime.v1.RuntimeService/Version
	Oct 01 19:54:36 multinode-325713 crio[2688]: time="2024-10-01 19:54:36.157659815Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff348d44-170b-4424-a4e8-ab03816bc38c name=/runtime.v1.RuntimeService/Version
	Oct 01 19:54:36 multinode-325713 crio[2688]: time="2024-10-01 19:54:36.158722188Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=056567fe-5ff6-443f-b9e9-06d95dcc819e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:54:36 multinode-325713 crio[2688]: time="2024-10-01 19:54:36.159144714Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812476159120509,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=056567fe-5ff6-443f-b9e9-06d95dcc819e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:54:36 multinode-325713 crio[2688]: time="2024-10-01 19:54:36.159713993Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3fedadaa-11f0-4284-8e05-abfce86f166e name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:54:36 multinode-325713 crio[2688]: time="2024-10-01 19:54:36.159778567Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3fedadaa-11f0-4284-8e05-abfce86f166e name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:54:36 multinode-325713 crio[2688]: time="2024-10-01 19:54:36.160160707Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b834b2eb85399ce2fb868e88427aa76dca10dbdd8cbbaa50408427c4924cfc2,PodSandboxId:8d512f727350db7e42fb355890131b9202b3b5ac2f7cf97bb0ac0897743a2887,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727812411494846624,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nhjc5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10d9056-2747-4c16-be7f-478f969d5d3a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70eda3c33aa85b6d005a8045ca58d14983670f13a2f0a3403770d9d77d1eaf5,PodSandboxId:f6df19e7ef815784870ba6cfaa2a215f639a34f4ba4aa828afe952fa36f201ae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727812377889364661,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7kvjb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3af66262-e3a8-4687-bf10-f7e139689769,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8622827e0bea2327d91748e496fa5a35b91539cac3bc3d17318689ecbf817385,PodSandboxId:7a04724b1fa98a29c2c22ae184bff58fbe3f0d94fd27dc3b3789b7be5c370477,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727812377961693569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-swx5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fa4c293-22e5-4a77-b851-3ae28e745a58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e133d880f95931cecea791c307dbd3b63126f009258e015eab481ab1ffde4c7a,PodSandboxId:81b1c36d12ae4bd3bf4d4982f3599008911582918553c0a746f84be09c849dc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727812377886814724,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqznz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4551ba7-af95-4699-a104-1e0b0c3c965b,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f406b30d4c63e456cd07ab7d97da4bf2e332b36fcc54315320d56f51c5399c,PodSandboxId:b10670eed02ee460bbe023ab5779a9f0a7aed0572e68bca0fec3438878b0a36e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727812377839104762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279c7942-da40-453f-b94f-2795b1c84c5d,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8cca9641eaeaec6d2442941fde94017ec30b1be1fa78944aa88014145d48b14,PodSandboxId:d177cd495f846e32744ad856b6fc7972ac9d9a2642ad5545b96f957ef7b1f3ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727812372965848782,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b4c161593a5efe7a8c831c698c0fa1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91728fb10efec288907940bbd25068fbc59d3bf362903c3ee875cb425a7e9570,PodSandboxId:b9ca6a212d89c8fee5b39731beaa923e3c583d0537768e387c542d6f17a7845c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727812372994973249,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 054fcb6311ec0c9eed872f8380f10fef,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb81c99fa1c2e42286c70d520a49c2ad2ec6c2c8c728399216de29090137bf71,PodSandboxId:7590ee8b5f847ce6d77d4d8d1ae22ae7e1de9601c8c50fdce24c675a9303bffe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727812372902988561,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14066c00a6e2b917f1e44bc7ec721b3b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:454294743e585aefdb165075a88ffc9546bf7dac940543d3064fa8252cfe5b64,PodSandboxId:e1b7e71701a6300d0368715b3ead5c3d45d5044a83e61dcf39e592d182fc1042,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727812372884212491,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bf22a4e81d08ba90ba65b23ac7659,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36dd8c1d835614188d453bde713aaf3ead777013b290b859b9ef1cf875c1b685,PodSandboxId:0108e2f859c4b9e450abbb0dc80b3ea050d18785ce021d693ab87b230b013c18,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727812043159380149,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nhjc5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10d9056-2747-4c16-be7f-478f969d5d3a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50abdc221179748e0e77361468d2966268a00dbb73094bf062c3fc4dd4abab85,PodSandboxId:9013eb36b71b5e1fe146ed5c7cacfd3d5fa4aac2a0073e7c062d23327122e28e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727811987017516787,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-swx5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fa4c293-22e5-4a77-b851-3ae28e745a58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73d14dc9d500ff21c9c06b2a020bc93776770a604b0f75cc50773c705cd6ff0,PodSandboxId:b9b43bf6e515ac84d92762f64983b1829820bf2bd6a095077cc936f208c9d88f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727811986960924497,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 279c7942-da40-453f-b94f-2795b1c84c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74cc8c8d45eb8e370db6e5a01ea691e69597f620ba5e51937ab8a6c8a4efb2ab,PodSandboxId:7f69035cb9fd7cd59575e995ecaf53d33d5b0cc28348f2994cc6d8258bbe1a39,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727811975043730765,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7kvjb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 3af66262-e3a8-4687-bf10-f7e139689769,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c753d689839b591bdbd1e4b857d4fc8207b7f30ad75e876545e5c20f6aea9c17,PodSandboxId:12f30e785442ae580bcbeff933862b69e54008da91a2c67f36e2a9d0c48d8e72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727811974820289353,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqznz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4551ba7-af95-4699-a104
-1e0b0c3c965b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99e0c7308d4811ed9a37c1503f0c4f652731732c727fddb6b2ca91a0d1d1207a,PodSandboxId:4f3f4a90ff8fba31bc0128beb7941fee07256d96ae2bb46791196434f6cc2a35,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727811963393478784,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 054fcb6
311ec0c9eed872f8380f10fef,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a87badf95fa60dbc3077c37ef50ce351665296ba8906532d97ef8dc1f7e01639,PodSandboxId:c8353829f738e155c8a3fd6c5b006eca9c86c3471c91bda191a91b08ce182339,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727811963386369788,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b4c161593a5efe7a8c831c698c0fa1,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5825a9ff6472b047a265e5afd5811414044723e46816206e4beb6b418b69e24,PodSandboxId:5790e9d0912b0802ba052d340247180ecd19df92be2648f40ae71124e5e27d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727811963393006733,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14066c00a6e2b917f1e44bc7ec721b3b,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19d51ef666dc5924ed51fc7a56fd063604dfa6c7c7ed8f6f1edf773e2ddf803a,PodSandboxId:d1410a265d7f54ed665f318621cb9f3ed483ad896ddb5139c6fc994458d41b4a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727811963264479921,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bf22a4e81d08ba90ba65b23ac7659,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3fedadaa-11f0-4284-8e05-abfce86f166e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6b834b2eb8539       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   8d512f727350d       busybox-7dff88458-nhjc5
	8622827e0bea2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   1                   7a04724b1fa98       coredns-7c65d6cfc9-swx5f
	e70eda3c33aa8       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   f6df19e7ef815       kindnet-7kvjb
	e133d880f9593       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                1                   81b1c36d12ae4       kube-proxy-wqznz
	b8f406b30d4c6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   b10670eed02ee       storage-provisioner
	91728fb10efec       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   1                   b9ca6a212d89c       kube-controller-manager-multinode-325713
	e8cca9641eaea       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   d177cd495f846       etcd-multinode-325713
	cb81c99fa1c2e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      About a minute ago   Running             kube-scheduler            1                   7590ee8b5f847       kube-scheduler-multinode-325713
	454294743e585       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            1                   e1b7e71701a63       kube-apiserver-multinode-325713
	36dd8c1d83561       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   0108e2f859c4b       busybox-7dff88458-nhjc5
	50abdc2211797       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      8 minutes ago        Exited              coredns                   0                   9013eb36b71b5       coredns-7c65d6cfc9-swx5f
	e73d14dc9d500       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   b9b43bf6e515a       storage-provisioner
	74cc8c8d45eb8       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      8 minutes ago        Exited              kindnet-cni               0                   7f69035cb9fd7       kindnet-7kvjb
	c753d689839b5       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      8 minutes ago        Exited              kube-proxy                0                   12f30e785442a       kube-proxy-wqznz
	99e0c7308d481       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      8 minutes ago        Exited              kube-controller-manager   0                   4f3f4a90ff8fb       kube-controller-manager-multinode-325713
	b5825a9ff6472       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      8 minutes ago        Exited              kube-scheduler            0                   5790e9d0912b0       kube-scheduler-multinode-325713
	a87badf95fa60       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   c8353829f738e       etcd-multinode-325713
	19d51ef666dc5       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      8 minutes ago        Exited              kube-apiserver            0                   d1410a265d7f5       kube-apiserver-multinode-325713
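
The container status table above is the node's CRI-level view after the restart: each control-plane and workload container has a Running attempt 1 alongside its Exited attempt 0 from the first boot. As a rough way to reproduce this listing (assuming shell access to the primary node and that crictl is installed there, neither of which is shown in this log), something like the following should give a similar view:

    # list running and exited containers straight from the CRI runtime (CRI-O here)
    minikube ssh -p multinode-325713 -- sudo crictl ps -a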
	
	
	==> coredns [50abdc221179748e0e77361468d2966268a00dbb73094bf062c3fc4dd4abab85] <==
	[INFO] 10.244.0.3:34664 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001904636s
	[INFO] 10.244.0.3:53084 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000142914s
	[INFO] 10.244.0.3:53976 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111019s
	[INFO] 10.244.0.3:45703 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001228243s
	[INFO] 10.244.0.3:56693 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091392s
	[INFO] 10.244.0.3:46093 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000153855s
	[INFO] 10.244.0.3:46598 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007144s
	[INFO] 10.244.1.2:39262 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000299375s
	[INFO] 10.244.1.2:58993 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000171423s
	[INFO] 10.244.1.2:33484 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000217752s
	[INFO] 10.244.1.2:48567 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000152447s
	[INFO] 10.244.0.3:42810 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000203058s
	[INFO] 10.244.0.3:39523 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103125s
	[INFO] 10.244.0.3:58960 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158691s
	[INFO] 10.244.0.3:56682 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092203s
	[INFO] 10.244.1.2:54920 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221604s
	[INFO] 10.244.1.2:42519 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00019814s
	[INFO] 10.244.1.2:59332 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0002861s
	[INFO] 10.244.1.2:36941 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000168401s
	[INFO] 10.244.0.3:60260 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000173387s
	[INFO] 10.244.0.3:34031 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000091425s
	[INFO] 10.244.0.3:48273 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082832s
	[INFO] 10.244.0.3:50031 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000064227s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8622827e0bea2327d91748e496fa5a35b91539cac3bc3d17318689ecbf817385] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:49996 - 391 "HINFO IN 3521685697945954381.4462412365812783941. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012983011s
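
The two coredns logs bracket the restart: attempt 0 resolves cluster-internal names (kubernetes.default.svc.cluster.local, host.minikube.internal) for pods on 10.244.0.0/24 and 10.244.1.0/24 and then logs the expected SIGTERM and 5s lameduck window as it shuts down, while attempt 1 only shows startup plus its self-check query. Assuming the busybox deployment from this test is still running, the same kind of lookup can be replayed with a one-off exec (pod name taken from the container table above):

    # query cluster DNS from inside an existing busybox pod
    kubectl --context multinode-325713 exec busybox-7dff88458-nhjc5 -- nslookup kubernetes.default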
	
	
	==> describe nodes <==
	Name:               multinode-325713
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-325713
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=multinode-325713
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T19_46_10_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:46:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-325713
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:54:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 19:52:56 +0000   Tue, 01 Oct 2024 19:46:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 19:52:56 +0000   Tue, 01 Oct 2024 19:46:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 19:52:56 +0000   Tue, 01 Oct 2024 19:46:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 19:52:56 +0000   Tue, 01 Oct 2024 19:46:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.165
	  Hostname:    multinode-325713
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8239c3eb3fc9460a961da917ebe46ad0
	  System UUID:                8239c3eb-3fc9-460a-961d-a917ebe46ad0
	  Boot ID:                    078d2ed7-8b7e-4053-8168-a2fd02e67089
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nhjc5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
	  kube-system                 coredns-7c65d6cfc9-swx5f                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m22s
	  kube-system                 etcd-multinode-325713                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m27s
	  kube-system                 kindnet-7kvjb                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m22s
	  kube-system                 kube-apiserver-multinode-325713             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-controller-manager-multinode-325713    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-proxy-wqznz                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 kube-scheduler-multinode-325713             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m21s                  kube-proxy       
	  Normal  Starting                 98s                    kube-proxy       
	  Normal  Starting                 8m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m34s (x8 over 8m34s)  kubelet          Node multinode-325713 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m34s (x8 over 8m34s)  kubelet          Node multinode-325713 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m34s (x7 over 8m34s)  kubelet          Node multinode-325713 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m27s                  kubelet          Node multinode-325713 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  8m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m27s                  kubelet          Node multinode-325713 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m27s                  kubelet          Node multinode-325713 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m27s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m23s                  node-controller  Node multinode-325713 event: Registered Node multinode-325713 in Controller
	  Normal  NodeReady                8m10s                  kubelet          Node multinode-325713 status is now: NodeReady
	  Normal  Starting                 104s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  104s (x8 over 104s)    kubelet          Node multinode-325713 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s (x8 over 104s)    kubelet          Node multinode-325713 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s (x7 over 104s)    kubelet          Node multinode-325713 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  104s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           96s                    node-controller  Node multinode-325713 event: Registered Node multinode-325713 in Controller
	
	
	Name:               multinode-325713-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-325713-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=multinode-325713
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T19_53_36_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:53:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-325713-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:54:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 19:54:06 +0000   Tue, 01 Oct 2024 19:53:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 19:54:06 +0000   Tue, 01 Oct 2024 19:53:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 19:54:06 +0000   Tue, 01 Oct 2024 19:53:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 19:54:06 +0000   Tue, 01 Oct 2024 19:53:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.53
	  Hostname:    multinode-325713-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c04112f801c44849a3b43c67917acef8
	  System UUID:                c04112f8-01c4-4849-a3b4-3c67917acef8
	  Boot ID:                    1c3b35d5-7e26-42ef-a4a8-5dfd5f914bf4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lppvx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kindnet-h8ld7              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m39s
	  kube-system                 kube-proxy-kf9lq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 55s                    kube-proxy       
	  Normal  Starting                 7m33s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m39s (x2 over 7m40s)  kubelet          Node multinode-325713-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m39s (x2 over 7m40s)  kubelet          Node multinode-325713-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m39s (x2 over 7m40s)  kubelet          Node multinode-325713-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m19s                  kubelet          Node multinode-325713-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  61s (x2 over 61s)      kubelet          Node multinode-325713-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x2 over 61s)      kubelet          Node multinode-325713-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x2 over 61s)      kubelet          Node multinode-325713-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  61s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           56s                    node-controller  Node multinode-325713-m02 event: Registered Node multinode-325713-m02 in Controller
	  Normal  NodeReady                41s                    kubelet          Node multinode-325713-m02 status is now: NodeReady
	
	
	Name:               multinode-325713-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-325713-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=multinode-325713
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T19_54_15_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:54:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-325713-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:54:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 19:54:33 +0000   Tue, 01 Oct 2024 19:54:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 19:54:33 +0000   Tue, 01 Oct 2024 19:54:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 19:54:33 +0000   Tue, 01 Oct 2024 19:54:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 19:54:33 +0000   Tue, 01 Oct 2024 19:54:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    multinode-325713-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d3016b7e3777475d8f64b82b7f08491e
	  System UUID:                d3016b7e-3777-475d-8f64-b82b7f08491e
	  Boot ID:                    34a7bf24-15d2-4ce6-9387-76686219c117
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7xgfk       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m42s
	  kube-system                 kube-proxy-7wwrh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m35s                  kube-proxy  
	  Normal  Starting                 18s                    kube-proxy  
	  Normal  Starting                 5m45s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m42s (x2 over 6m42s)  kubelet     Node multinode-325713-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m42s (x2 over 6m42s)  kubelet     Node multinode-325713-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m42s (x2 over 6m42s)  kubelet     Node multinode-325713-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m42s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m21s                  kubelet     Node multinode-325713-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m50s (x2 over 5m50s)  kubelet     Node multinode-325713-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m50s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m50s (x2 over 5m50s)  kubelet     Node multinode-325713-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m50s (x2 over 5m50s)  kubelet     Node multinode-325713-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m30s                  kubelet     Node multinode-325713-m03 status is now: NodeReady
	  Normal  Starting                 22s                    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x2 over 22s)      kubelet     Node multinode-325713-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 22s)      kubelet     Node multinode-325713-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 22s)      kubelet     Node multinode-325713-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-325713-m03 status is now: NodeReady
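
This section is the per-node detail for the whole cluster: the control plane multinode-325713 and the workers m02 and m03 are all Ready, each with its own PodCIDR (10.244.0.0/24, 10.244.1.0/24, 10.244.2.0/24) and a fresh set of kubelet restart events. Assuming the kubeconfig context created by this profile is still available, the same snapshot can be taken with:

    # condensed node view (roles, versions, internal IPs)
    kubectl --context multinode-325713 get nodes -o wide
    # full per-node detail, as captured above
    kubectl --context multinode-325713 describe nodes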
	
	
	==> dmesg <==
	[  +0.057422] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.179463] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.126943] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.280039] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +3.805720] systemd-fstab-generator[738]: Ignoring "noauto" option for root device
	[Oct 1 19:46] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.059319] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.994477] systemd-fstab-generator[1208]: Ignoring "noauto" option for root device
	[  +0.097991] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.132092] systemd-fstab-generator[1312]: Ignoring "noauto" option for root device
	[  +0.137973] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.491133] kauditd_printk_skb: 69 callbacks suppressed
	[Oct 1 19:47] kauditd_printk_skb: 12 callbacks suppressed
	[Oct 1 19:52] systemd-fstab-generator[2612]: Ignoring "noauto" option for root device
	[  +0.145055] systemd-fstab-generator[2624]: Ignoring "noauto" option for root device
	[  +0.181365] systemd-fstab-generator[2638]: Ignoring "noauto" option for root device
	[  +0.136701] systemd-fstab-generator[2650]: Ignoring "noauto" option for root device
	[  +0.279353] systemd-fstab-generator[2678]: Ignoring "noauto" option for root device
	[  +6.384532] systemd-fstab-generator[2773]: Ignoring "noauto" option for root device
	[  +0.082798] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.990543] systemd-fstab-generator[2893]: Ignoring "noauto" option for root device
	[  +5.716695] kauditd_printk_skb: 74 callbacks suppressed
	[Oct 1 19:53] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.093272] systemd-fstab-generator[3748]: Ignoring "noauto" option for root device
	[ +20.528732] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [a87badf95fa60dbc3077c37ef50ce351665296ba8906532d97ef8dc1f7e01639] <==
	{"level":"info","ts":"2024-10-01T19:46:04.447958Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T19:46:04.441276Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T19:46:04.444434Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T19:46:04.455762Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.165:2379"}
	{"level":"info","ts":"2024-10-01T19:46:04.472463Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-01T19:46:57.093437Z","caller":"traceutil/trace.go:171","msg":"trace[1354207169] transaction","detail":"{read_only:false; response_revision:474; number_of_response:1; }","duration":"231.607464ms","start":"2024-10-01T19:46:56.861803Z","end":"2024-10-01T19:46:57.093410Z","steps":["trace[1354207169] 'process raft request'  (duration: 227.227597ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T19:47:00.449273Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.528566ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15705901595387214331 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kindnet-h8ld7.17fa6be0ca6e9b9d\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kindnet-h8ld7.17fa6be0ca6e9b9d\" value_size:676 lease:6482529558532437523 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-10-01T19:47:00.449389Z","caller":"traceutil/trace.go:171","msg":"trace[306755950] transaction","detail":"{read_only:false; response_revision:507; number_of_response:1; }","duration":"173.637589ms","start":"2024-10-01T19:47:00.275735Z","end":"2024-10-01T19:47:00.449373Z","steps":["trace[306755950] 'process raft request'  (duration: 41.517342ms)","trace[306755950] 'compare'  (duration: 131.414208ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-01T19:47:54.563676Z","caller":"traceutil/trace.go:171","msg":"trace[1298207694] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"221.532393ms","start":"2024-10-01T19:47:54.342128Z","end":"2024-10-01T19:47:54.563660Z","steps":["trace[1298207694] 'process raft request'  (duration: 221.306374ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T19:47:54.563644Z","caller":"traceutil/trace.go:171","msg":"trace[536335664] linearizableReadLoop","detail":"{readStateIndex:642; appliedIndex:641; }","duration":"197.941427ms","start":"2024-10-01T19:47:54.365680Z","end":"2024-10-01T19:47:54.563621Z","steps":["trace[536335664] 'read index received'  (duration: 197.720553ms)","trace[536335664] 'applied index is now lower than readState.Index'  (duration: 219.861µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-01T19:47:54.563834Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.082016ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-325713-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T19:47:54.564075Z","caller":"traceutil/trace.go:171","msg":"trace[220120163] range","detail":"{range_begin:/registry/minions/multinode-325713-m03; range_end:; response_count:0; response_revision:610; }","duration":"198.379891ms","start":"2024-10-01T19:47:54.365674Z","end":"2024-10-01T19:47:54.564054Z","steps":["trace[220120163] 'agreement among raft nodes before linearized reading'  (duration: 198.045993ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T19:47:57.764246Z","caller":"traceutil/trace.go:171","msg":"trace[1694425089] transaction","detail":"{read_only:false; response_revision:637; number_of_response:1; }","duration":"169.896676ms","start":"2024-10-01T19:47:57.594331Z","end":"2024-10-01T19:47:57.764228Z","steps":["trace[1694425089] 'process raft request'  (duration: 169.759607ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T19:48:03.643460Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.809709ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15705901595387214904 > lease_revoke:<id:59f692499e4bbb8c>","response":"size:28"}
	{"level":"info","ts":"2024-10-01T19:48:50.885821Z","caller":"traceutil/trace.go:171","msg":"trace[97414680] transaction","detail":"{read_only:false; response_revision:740; number_of_response:1; }","duration":"171.598158ms","start":"2024-10-01T19:48:50.714163Z","end":"2024-10-01T19:48:50.885762Z","steps":["trace[97414680] 'process raft request'  (duration: 171.168601ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T19:51:11.644235Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-01T19:51:11.644349Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-325713","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.165:2380"],"advertise-client-urls":["https://192.168.39.165:2379"]}
	{"level":"warn","ts":"2024-10-01T19:51:11.644471Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-01T19:51:11.644647Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-01T19:51:11.721915Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.165:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-01T19:51:11.721973Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.165:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-01T19:51:11.723774Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ffc3b7517aaad9f6","current-leader-member-id":"ffc3b7517aaad9f6"}
	{"level":"info","ts":"2024-10-01T19:51:11.726502Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2024-10-01T19:51:11.726797Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2024-10-01T19:51:11.726892Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-325713","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.165:2380"],"advertise-client-urls":["https://192.168.39.165:2379"]}
	
	
	==> etcd [e8cca9641eaeaec6d2442941fde94017ec30b1be1fa78944aa88014145d48b14] <==
	{"level":"info","ts":"2024-10-01T19:52:53.571717Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"58f0a6b9f17e1f60","local-member-id":"ffc3b7517aaad9f6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T19:52:53.571759Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T19:52:53.587369Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-01T19:52:53.587682Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"ffc3b7517aaad9f6","initial-advertise-peer-urls":["https://192.168.39.165:2380"],"listen-peer-urls":["https://192.168.39.165:2380"],"advertise-client-urls":["https://192.168.39.165:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.165:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-01T19:52:53.587720Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-01T19:52:53.587815Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2024-10-01T19:52:53.587834Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2024-10-01T19:52:55.384683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-01T19:52:55.384875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-01T19:52:55.384945Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 received MsgPreVoteResp from ffc3b7517aaad9f6 at term 2"}
	{"level":"info","ts":"2024-10-01T19:52:55.384985Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 became candidate at term 3"}
	{"level":"info","ts":"2024-10-01T19:52:55.385010Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 received MsgVoteResp from ffc3b7517aaad9f6 at term 3"}
	{"level":"info","ts":"2024-10-01T19:52:55.385038Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 became leader at term 3"}
	{"level":"info","ts":"2024-10-01T19:52:55.385064Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ffc3b7517aaad9f6 elected leader ffc3b7517aaad9f6 at term 3"}
	{"level":"info","ts":"2024-10-01T19:52:55.393906Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T19:52:55.393860Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ffc3b7517aaad9f6","local-member-attributes":"{Name:multinode-325713 ClientURLs:[https://192.168.39.165:2379]}","request-path":"/0/members/ffc3b7517aaad9f6/attributes","cluster-id":"58f0a6b9f17e1f60","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-01T19:52:55.394888Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T19:52:55.395158Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-01T19:52:55.395186Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-01T19:52:55.395314Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T19:52:55.395746Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T19:52:55.396557Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.165:2379"}
	{"level":"info","ts":"2024-10-01T19:52:55.397809Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-01T19:53:40.220136Z","caller":"traceutil/trace.go:171","msg":"trace[1499003064] transaction","detail":"{read_only:false; response_revision:1076; number_of_response:1; }","duration":"171.764299ms","start":"2024-10-01T19:53:40.048337Z","end":"2024-10-01T19:53:40.220101Z","steps":["trace[1499003064] 'process raft request'  (duration: 171.623621ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T19:54:18.072134Z","caller":"traceutil/trace.go:171","msg":"trace[1789535552] transaction","detail":"{read_only:false; response_revision:1168; number_of_response:1; }","duration":"109.263722ms","start":"2024-10-01T19:54:17.962855Z","end":"2024-10-01T19:54:18.072119Z","steps":["trace[1789535552] 'process raft request'  (duration: 109.140663ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:54:36 up 9 min,  0 users,  load average: 0.35, 0.21, 0.10
	Linux multinode-325713 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [74cc8c8d45eb8e370db6e5a01ea691e69597f620ba5e51937ab8a6c8a4efb2ab] <==
	I1001 19:50:26.031478       1 main.go:322] Node multinode-325713-m03 has CIDR [10.244.3.0/24] 
	I1001 19:50:36.025474       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I1001 19:50:36.025525       1 main.go:299] handling current node
	I1001 19:50:36.025540       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1001 19:50:36.025546       1 main.go:322] Node multinode-325713-m02 has CIDR [10.244.1.0/24] 
	I1001 19:50:36.025729       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I1001 19:50:36.025748       1 main.go:322] Node multinode-325713-m03 has CIDR [10.244.3.0/24] 
	I1001 19:50:46.029761       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I1001 19:50:46.029887       1 main.go:322] Node multinode-325713-m03 has CIDR [10.244.3.0/24] 
	I1001 19:50:46.030059       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I1001 19:50:46.030083       1 main.go:299] handling current node
	I1001 19:50:46.030105       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1001 19:50:46.030121       1 main.go:322] Node multinode-325713-m02 has CIDR [10.244.1.0/24] 
	I1001 19:50:56.030801       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I1001 19:50:56.031048       1 main.go:299] handling current node
	I1001 19:50:56.031086       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1001 19:50:56.031110       1 main.go:322] Node multinode-325713-m02 has CIDR [10.244.1.0/24] 
	I1001 19:50:56.031347       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I1001 19:50:56.031378       1 main.go:322] Node multinode-325713-m03 has CIDR [10.244.3.0/24] 
	I1001 19:51:06.033064       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I1001 19:51:06.033187       1 main.go:299] handling current node
	I1001 19:51:06.033227       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1001 19:51:06.033233       1 main.go:322] Node multinode-325713-m02 has CIDR [10.244.1.0/24] 
	I1001 19:51:06.033380       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I1001 19:51:06.033403       1 main.go:322] Node multinode-325713-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [e70eda3c33aa85b6d005a8045ca58d14983670f13a2f0a3403770d9d77d1eaf5] <==
	I1001 19:53:48.829707       1 main.go:322] Node multinode-325713-m03 has CIDR [10.244.3.0/24] 
	I1001 19:53:58.827270       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I1001 19:53:58.827441       1 main.go:299] handling current node
	I1001 19:53:58.827484       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1001 19:53:58.827512       1 main.go:322] Node multinode-325713-m02 has CIDR [10.244.1.0/24] 
	I1001 19:53:58.827718       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I1001 19:53:58.827755       1 main.go:322] Node multinode-325713-m03 has CIDR [10.244.3.0/24] 
	I1001 19:54:08.827132       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I1001 19:54:08.827255       1 main.go:299] handling current node
	I1001 19:54:08.827300       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1001 19:54:08.827373       1 main.go:322] Node multinode-325713-m02 has CIDR [10.244.1.0/24] 
	I1001 19:54:08.827556       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I1001 19:54:08.827658       1 main.go:322] Node multinode-325713-m03 has CIDR [10.244.3.0/24] 
	I1001 19:54:18.827020       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I1001 19:54:18.827053       1 main.go:299] handling current node
	I1001 19:54:18.827066       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1001 19:54:18.827071       1 main.go:322] Node multinode-325713-m02 has CIDR [10.244.1.0/24] 
	I1001 19:54:18.827218       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I1001 19:54:18.827239       1 main.go:322] Node multinode-325713-m03 has CIDR [10.244.2.0/24] 
	I1001 19:54:28.829041       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I1001 19:54:28.829092       1 main.go:299] handling current node
	I1001 19:54:28.829106       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1001 19:54:28.829112       1 main.go:322] Node multinode-325713-m02 has CIDR [10.244.1.0/24] 
	I1001 19:54:28.829257       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I1001 19:54:28.829274       1 main.go:322] Node multinode-325713-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [19d51ef666dc5924ed51fc7a56fd063604dfa6c7c7ed8f6f1edf773e2ddf803a] <==
	I1001 19:51:11.671009       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	W1001 19:51:11.673835       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1001 19:51:11.679054       1 storage_flowcontrol.go:186] APF bootstrap ensurer is exiting
	I1001 19:51:11.681072       1 cluster_authentication_trust_controller.go:466] Shutting down cluster_authentication_trust_controller controller
	I1001 19:51:11.681298       1 crd_finalizer.go:281] Shutting down CRDFinalizer
	I1001 19:51:11.681337       1 establishing_controller.go:92] Shutting down EstablishingController
	I1001 19:51:11.681352       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	I1001 19:51:11.681370       1 apiservice_controller.go:134] Shutting down APIServiceRegistrationController
	I1001 19:51:11.681397       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I1001 19:51:11.681421       1 controller.go:132] Ending legacy_token_tracking_controller
	I1001 19:51:11.681443       1 controller.go:133] Shutting down legacy_token_tracking_controller
	I1001 19:51:11.681458       1 naming_controller.go:305] Shutting down NamingConditionController
	I1001 19:51:11.681487       1 controller.go:120] Shutting down OpenAPI V3 controller
	I1001 19:51:11.681510       1 autoregister_controller.go:168] Shutting down autoregister controller
	I1001 19:51:11.681541       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I1001 19:51:11.682020       1 controller.go:170] Shutting down OpenAPI controller
	I1001 19:51:11.682054       1 apiapproval_controller.go:201] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I1001 19:51:11.682079       1 nonstructuralschema_controller.go:207] Shutting down NonStructuralSchemaConditionController
	I1001 19:51:11.682097       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I1001 19:51:11.683059       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I1001 19:51:11.683083       1 remote_available_controller.go:427] Shutting down RemoteAvailability controller
	I1001 19:51:11.683099       1 local_available_controller.go:172] Shutting down LocalAvailability controller
	I1001 19:51:11.687793       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	W1001 19:51:11.690966       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 19:51:11.691043       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [454294743e585aefdb165075a88ffc9546bf7dac940543d3064fa8252cfe5b64] <==
	I1001 19:52:56.730161       1 aggregator.go:171] initial CRD sync complete...
	I1001 19:52:56.730182       1 autoregister_controller.go:144] Starting autoregister controller
	I1001 19:52:56.730189       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1001 19:52:56.730193       1 cache.go:39] Caches are synced for autoregister controller
	I1001 19:52:56.745684       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1001 19:52:56.745806       1 policy_source.go:224] refreshing policies
	I1001 19:52:56.746635       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1001 19:52:56.786887       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1001 19:52:56.787253       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1001 19:52:56.787310       1 shared_informer.go:320] Caches are synced for configmaps
	I1001 19:52:56.787357       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1001 19:52:56.787238       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1001 19:52:56.788925       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1001 19:52:56.789021       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1001 19:52:56.793553       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E1001 19:52:56.795692       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1001 19:52:56.807226       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1001 19:52:57.593505       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1001 19:52:58.802961       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1001 19:52:58.958315       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1001 19:52:58.971905       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1001 19:52:59.050630       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1001 19:52:59.060920       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1001 19:53:00.043246       1 controller.go:615] quota admission added evaluator for: endpoints
	I1001 19:53:00.442839       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [91728fb10efec288907940bbd25068fbc59d3bf362903c3ee875cb425a7e9570] <==
	I1001 19:53:55.522332       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m02"
	I1001 19:53:55.530945       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="36.523µs"
	I1001 19:53:55.541892       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.541µs"
	I1001 19:54:00.227667       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m02"
	I1001 19:54:00.746658       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="9.872706ms"
	I1001 19:54:00.746956       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="44.761µs"
	I1001 19:54:06.612080       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m02"
	I1001 19:54:13.215752       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:54:13.232290       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:54:13.466838       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-325713-m02"
	I1001 19:54:13.466948       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:54:14.583004       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-325713-m02"
	I1001 19:54:14.583423       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-325713-m03\" does not exist"
	I1001 19:54:14.593814       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-325713-m03" podCIDRs=["10.244.2.0/24"]
	I1001 19:54:14.593855       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:54:14.594073       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:54:14.606451       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:54:14.995082       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:54:15.325003       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:54:15.460644       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:54:24.693123       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:54:33.279070       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:54:33.279340       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-325713-m03"
	I1001 19:54:33.293523       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:54:35.241489       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	
	
	==> kube-controller-manager [99e0c7308d4811ed9a37c1503f0c4f652731732c727fddb6b2ca91a0d1d1207a] <==
	I1001 19:48:45.275166       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:48:45.275227       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-325713-m02"
	I1001 19:48:46.437029       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-325713-m03\" does not exist"
	I1001 19:48:46.437619       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-325713-m02"
	I1001 19:48:46.447709       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-325713-m03" podCIDRs=["10.244.3.0/24"]
	I1001 19:48:46.447944       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:48:46.448142       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:48:46.466960       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:48:46.843979       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:48:47.217507       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:48:48.270179       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:48:56.717472       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:49:06.078704       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-325713-m02"
	I1001 19:49:06.079173       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:49:06.090897       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:49:08.197269       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:49:48.215127       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:49:48.215626       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-325713-m02"
	I1001 19:49:48.233741       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:49:53.249182       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m02"
	I1001 19:49:53.263364       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m02"
	I1001 19:49:53.305459       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.674576ms"
	I1001 19:49:53.307273       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="170.732µs"
	I1001 19:49:53.317462       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:50:03.394952       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m02"
	
	
	==> kube-proxy [c753d689839b591bdbd1e4b857d4fc8207b7f30ad75e876545e5c20f6aea9c17] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 19:46:15.245419       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 19:46:15.255000       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.165"]
	E1001 19:46:15.255073       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 19:46:15.307852       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 19:46:15.307891       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 19:46:15.307914       1 server_linux.go:169] "Using iptables Proxier"
	I1001 19:46:15.311233       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 19:46:15.311440       1 server.go:483] "Version info" version="v1.31.1"
	I1001 19:46:15.311451       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 19:46:15.328491       1 config.go:328] "Starting node config controller"
	I1001 19:46:15.328509       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 19:46:15.329498       1 config.go:199] "Starting service config controller"
	I1001 19:46:15.329507       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 19:46:15.329759       1 config.go:105] "Starting endpoint slice config controller"
	I1001 19:46:15.329770       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 19:46:15.429002       1 shared_informer.go:320] Caches are synced for node config
	I1001 19:46:15.431157       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 19:46:15.431287       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [e133d880f95931cecea791c307dbd3b63126f009258e015eab481ab1ffde4c7a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 19:52:58.223930       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 19:52:58.241763       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.165"]
	E1001 19:52:58.241904       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 19:52:58.282871       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 19:52:58.282913       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 19:52:58.282938       1 server_linux.go:169] "Using iptables Proxier"
	I1001 19:52:58.287352       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 19:52:58.287744       1 server.go:483] "Version info" version="v1.31.1"
	I1001 19:52:58.287795       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 19:52:58.289227       1 config.go:199] "Starting service config controller"
	I1001 19:52:58.289303       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 19:52:58.289348       1 config.go:105] "Starting endpoint slice config controller"
	I1001 19:52:58.289365       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 19:52:58.290830       1 config.go:328] "Starting node config controller"
	I1001 19:52:58.290965       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 19:52:58.389655       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 19:52:58.389703       1 shared_informer.go:320] Caches are synced for service config
	I1001 19:52:58.392682       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b5825a9ff6472b047a265e5afd5811414044723e46816206e4beb6b418b69e24] <==
	E1001 19:46:06.476270       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 19:46:07.359417       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1001 19:46:07.359523       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 19:46:07.391979       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1001 19:46:07.392078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1001 19:46:07.403883       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1001 19:46:07.403981       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1001 19:46:07.502904       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1001 19:46:07.503314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 19:46:07.583540       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1001 19:46:07.583736       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 19:46:07.670647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1001 19:46:07.670928       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 19:46:07.670887       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1001 19:46:07.671842       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 19:46:07.821384       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1001 19:46:07.821485       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 19:46:07.845437       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1001 19:46:07.845533       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 19:46:07.859913       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1001 19:46:07.859962       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 19:46:07.862212       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1001 19:46:07.862255       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1001 19:46:09.266470       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1001 19:51:11.649872       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [cb81c99fa1c2e42286c70d520a49c2ad2ec6c2c8c728399216de29090137bf71] <==
	I1001 19:52:54.160233       1 serving.go:386] Generated self-signed cert in-memory
	W1001 19:52:56.666736       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1001 19:52:56.666826       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1001 19:52:56.666837       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1001 19:52:56.666862       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1001 19:52:56.717822       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1001 19:52:56.717914       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 19:52:56.720051       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 19:52:56.720163       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1001 19:52:56.720237       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1001 19:52:56.720329       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1001 19:52:56.821143       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 01 19:53:02 multinode-325713 kubelet[2900]: E1001 19:53:02.295442    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812382294454903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:53:05 multinode-325713 kubelet[2900]: I1001 19:53:05.756224    2900 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 01 19:53:12 multinode-325713 kubelet[2900]: E1001 19:53:12.299388    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812392297796503,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:53:12 multinode-325713 kubelet[2900]: E1001 19:53:12.299435    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812392297796503,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:53:22 multinode-325713 kubelet[2900]: E1001 19:53:22.302055    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812402301388584,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:53:22 multinode-325713 kubelet[2900]: E1001 19:53:22.302086    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812402301388584,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:53:32 multinode-325713 kubelet[2900]: E1001 19:53:32.306294    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812412306036584,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:53:32 multinode-325713 kubelet[2900]: E1001 19:53:32.306713    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812412306036584,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:53:42 multinode-325713 kubelet[2900]: E1001 19:53:42.309252    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812422308789281,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:53:42 multinode-325713 kubelet[2900]: E1001 19:53:42.310106    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812422308789281,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:53:52 multinode-325713 kubelet[2900]: E1001 19:53:52.282159    2900 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 01 19:53:52 multinode-325713 kubelet[2900]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 01 19:53:52 multinode-325713 kubelet[2900]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 01 19:53:52 multinode-325713 kubelet[2900]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 19:53:52 multinode-325713 kubelet[2900]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 19:53:52 multinode-325713 kubelet[2900]: E1001 19:53:52.312518    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812432312283977,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:53:52 multinode-325713 kubelet[2900]: E1001 19:53:52.312542    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812432312283977,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:54:02 multinode-325713 kubelet[2900]: E1001 19:54:02.315176    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812442314778893,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:54:02 multinode-325713 kubelet[2900]: E1001 19:54:02.315814    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812442314778893,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:54:12 multinode-325713 kubelet[2900]: E1001 19:54:12.318666    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812452318041079,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:54:12 multinode-325713 kubelet[2900]: E1001 19:54:12.318707    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812452318041079,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:54:22 multinode-325713 kubelet[2900]: E1001 19:54:22.319910    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812462319528916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:54:22 multinode-325713 kubelet[2900]: E1001 19:54:22.319952    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812462319528916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:54:32 multinode-325713 kubelet[2900]: E1001 19:54:32.323442    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812472322324656,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:54:32 multinode-325713 kubelet[2900]: E1001 19:54:32.323509    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812472322324656,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1001 19:54:35.754645   50113 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19736-11198/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-325713 -n multinode-325713
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-325713 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (328.63s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (144.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 stop
E1001 19:56:34.840636   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-325713 stop: exit status 82 (2m0.476105519s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-325713-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-325713 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-325713 status: (18.688975293s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 status --alsologtostderr
E1001 19:56:59.024840   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-325713 status --alsologtostderr: (3.359978294s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-325713 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-325713 status --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-325713 -n multinode-325713
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-325713 logs -n 25: (1.473654377s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-325713 ssh -n                                                                 | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | multinode-325713-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-325713 cp multinode-325713-m02:/home/docker/cp-test.txt                       | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | multinode-325713:/home/docker/cp-test_multinode-325713-m02_multinode-325713.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-325713 ssh -n                                                                 | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | multinode-325713-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-325713 ssh -n multinode-325713 sudo cat                                       | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | /home/docker/cp-test_multinode-325713-m02_multinode-325713.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-325713 cp multinode-325713-m02:/home/docker/cp-test.txt                       | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | multinode-325713-m03:/home/docker/cp-test_multinode-325713-m02_multinode-325713-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-325713 ssh -n                                                                 | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | multinode-325713-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-325713 ssh -n multinode-325713-m03 sudo cat                                   | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | /home/docker/cp-test_multinode-325713-m02_multinode-325713-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-325713 cp testdata/cp-test.txt                                                | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | multinode-325713-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-325713 ssh -n                                                                 | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | multinode-325713-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-325713 cp multinode-325713-m03:/home/docker/cp-test.txt                       | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile187864513/001/cp-test_multinode-325713-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-325713 ssh -n                                                                 | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | multinode-325713-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-325713 cp multinode-325713-m03:/home/docker/cp-test.txt                       | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | multinode-325713:/home/docker/cp-test_multinode-325713-m03_multinode-325713.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-325713 ssh -n                                                                 | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | multinode-325713-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-325713 ssh -n multinode-325713 sudo cat                                       | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | /home/docker/cp-test_multinode-325713-m03_multinode-325713.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-325713 cp multinode-325713-m03:/home/docker/cp-test.txt                       | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | multinode-325713-m02:/home/docker/cp-test_multinode-325713-m03_multinode-325713-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-325713 ssh -n                                                                 | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | multinode-325713-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-325713 ssh -n multinode-325713-m02 sudo cat                                   | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | /home/docker/cp-test_multinode-325713-m03_multinode-325713-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-325713 node stop m03                                                          | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	| node    | multinode-325713 node start                                                             | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:49 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-325713                                                                | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:49 UTC |                     |
	| stop    | -p multinode-325713                                                                     | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:49 UTC |                     |
	| start   | -p multinode-325713                                                                     | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:51 UTC | 01 Oct 24 19:54 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-325713                                                                | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:54 UTC |                     |
	| node    | multinode-325713 node delete                                                            | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:54 UTC | 01 Oct 24 19:54 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-325713 stop                                                                   | multinode-325713 | jenkins | v1.34.0 | 01 Oct 24 19:54 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
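The two "stop" rows in the table above carry no value in the completion column. A minimal way to replay that step by hand, reusing the profile name and the same verbose flags the table already uses for "node start", is a sketch like:

    # Replay the stop step for the same profile with client-side debug logging.
    # Profile name and the -v/--alsologtostderr flags are taken from the table above.
    minikube stop -p multinode-325713 --alsologtostderr -v=7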
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 19:51:10
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 19:51:10.736246   48985 out.go:345] Setting OutFile to fd 1 ...
	I1001 19:51:10.736403   48985 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:51:10.736413   48985 out.go:358] Setting ErrFile to fd 2...
	I1001 19:51:10.736417   48985 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:51:10.736620   48985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 19:51:10.737172   48985 out.go:352] Setting JSON to false
	I1001 19:51:10.738064   48985 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":5613,"bootTime":1727806658,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 19:51:10.738163   48985 start.go:139] virtualization: kvm guest
	I1001 19:51:10.740050   48985 out.go:177] * [multinode-325713] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 19:51:10.741271   48985 notify.go:220] Checking for updates...
	I1001 19:51:10.741281   48985 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 19:51:10.742452   48985 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 19:51:10.743588   48985 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 19:51:10.744620   48985 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:51:10.745680   48985 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 19:51:10.747028   48985 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 19:51:10.748532   48985 config.go:182] Loaded profile config "multinode-325713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:51:10.748638   48985 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 19:51:10.749098   48985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:51:10.749152   48985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:51:10.763932   48985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40435
	I1001 19:51:10.764461   48985 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:51:10.765060   48985 main.go:141] libmachine: Using API Version  1
	I1001 19:51:10.765083   48985 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:51:10.765429   48985 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:51:10.765585   48985 main.go:141] libmachine: (multinode-325713) Calling .DriverName
	I1001 19:51:10.803278   48985 out.go:177] * Using the kvm2 driver based on existing profile
	I1001 19:51:10.804489   48985 start.go:297] selected driver: kvm2
	I1001 19:51:10.804516   48985 start.go:901] validating driver "kvm2" against &{Name:multinode-325713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:multinode-325713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.61 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false insp
ektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:51:10.804731   48985 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 19:51:10.805310   48985 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 19:51:10.805427   48985 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-11198/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 19:51:10.821034   48985 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 19:51:10.821863   48985 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 19:51:10.821905   48985 cni.go:84] Creating CNI manager for ""
	I1001 19:51:10.821955   48985 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1001 19:51:10.822021   48985 start.go:340] cluster config:
	{Name:multinode-325713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-325713 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.61 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:51:10.822149   48985 iso.go:125] acquiring lock: {Name:mk06aa0d5182c9fbfa5e40313025cbec2b4400a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 19:51:10.823884   48985 out.go:177] * Starting "multinode-325713" primary control-plane node in "multinode-325713" cluster
	I1001 19:51:10.824949   48985 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 19:51:10.825005   48985 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 19:51:10.825023   48985 cache.go:56] Caching tarball of preloaded images
	I1001 19:51:10.825171   48985 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 19:51:10.825197   48985 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
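The preload tarball that the cache check reports can be confirmed on the CI host with a plain ls against the path printed above; a minimal sketch:

    # Confirm the cached preload tarball exists and note its size (path copied from the log above).
    ls -lh /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4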
	I1001 19:51:10.825387   48985 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/multinode-325713/config.json ...
	I1001 19:51:10.825646   48985 start.go:360] acquireMachinesLock for multinode-325713: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 19:51:10.825700   48985 start.go:364] duration metric: took 28.217µs to acquireMachinesLock for "multinode-325713"
	I1001 19:51:10.825714   48985 start.go:96] Skipping create...Using existing machine configuration
	I1001 19:51:10.825721   48985 fix.go:54] fixHost starting: 
	I1001 19:51:10.826028   48985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:51:10.826063   48985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:51:10.840709   48985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33405
	I1001 19:51:10.841082   48985 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:51:10.841571   48985 main.go:141] libmachine: Using API Version  1
	I1001 19:51:10.841594   48985 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:51:10.841924   48985 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:51:10.842150   48985 main.go:141] libmachine: (multinode-325713) Calling .DriverName
	I1001 19:51:10.842358   48985 main.go:141] libmachine: (multinode-325713) Calling .GetState
	I1001 19:51:10.844197   48985 fix.go:112] recreateIfNeeded on multinode-325713: state=Running err=<nil>
	W1001 19:51:10.844236   48985 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 19:51:10.846035   48985 out.go:177] * Updating the running kvm2 "multinode-325713" VM ...
	I1001 19:51:10.847172   48985 machine.go:93] provisionDockerMachine start ...
	I1001 19:51:10.847197   48985 main.go:141] libmachine: (multinode-325713) Calling .DriverName
	I1001 19:51:10.847424   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHHostname
	I1001 19:51:10.850357   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:51:10.850913   48985 main.go:141] libmachine: (multinode-325713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:17:a5", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:45:45 +0000 UTC Type:0 Mac:52:54:00:df:17:a5 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-325713 Clientid:01:52:54:00:df:17:a5}
	I1001 19:51:10.850952   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined IP address 192.168.39.165 and MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:51:10.851128   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHPort
	I1001 19:51:10.851322   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHKeyPath
	I1001 19:51:10.851493   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHKeyPath
	I1001 19:51:10.851661   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHUsername
	I1001 19:51:10.851842   48985 main.go:141] libmachine: Using SSH client type: native
	I1001 19:51:10.852029   48985 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I1001 19:51:10.852041   48985 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 19:51:10.969781   48985 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-325713
	
	I1001 19:51:10.969808   48985 main.go:141] libmachine: (multinode-325713) Calling .GetMachineName
	I1001 19:51:10.970083   48985 buildroot.go:166] provisioning hostname "multinode-325713"
	I1001 19:51:10.970113   48985 main.go:141] libmachine: (multinode-325713) Calling .GetMachineName
	I1001 19:51:10.970325   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHHostname
	I1001 19:51:10.973103   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:51:10.973557   48985 main.go:141] libmachine: (multinode-325713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:17:a5", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:45:45 +0000 UTC Type:0 Mac:52:54:00:df:17:a5 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-325713 Clientid:01:52:54:00:df:17:a5}
	I1001 19:51:10.973584   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined IP address 192.168.39.165 and MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:51:10.973726   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHPort
	I1001 19:51:10.973962   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHKeyPath
	I1001 19:51:10.974141   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHKeyPath
	I1001 19:51:10.974287   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHUsername
	I1001 19:51:10.974415   48985 main.go:141] libmachine: Using SSH client type: native
	I1001 19:51:10.974584   48985 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I1001 19:51:10.974596   48985 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-325713 && echo "multinode-325713" | sudo tee /etc/hostname
	I1001 19:51:11.105668   48985 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-325713
	
	I1001 19:51:11.105703   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHHostname
	I1001 19:51:11.109013   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:51:11.109414   48985 main.go:141] libmachine: (multinode-325713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:17:a5", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:45:45 +0000 UTC Type:0 Mac:52:54:00:df:17:a5 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-325713 Clientid:01:52:54:00:df:17:a5}
	I1001 19:51:11.109448   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined IP address 192.168.39.165 and MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:51:11.109617   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHPort
	I1001 19:51:11.109784   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHKeyPath
	I1001 19:51:11.109964   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHKeyPath
	I1001 19:51:11.110109   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHUsername
	I1001 19:51:11.110246   48985 main.go:141] libmachine: Using SSH client type: native
	I1001 19:51:11.110491   48985 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I1001 19:51:11.110509   48985 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-325713' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-325713/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-325713' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 19:51:11.225617   48985 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 19:51:11.225657   48985 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 19:51:11.225703   48985 buildroot.go:174] setting up certificates
	I1001 19:51:11.225714   48985 provision.go:84] configureAuth start
	I1001 19:51:11.225728   48985 main.go:141] libmachine: (multinode-325713) Calling .GetMachineName
	I1001 19:51:11.226011   48985 main.go:141] libmachine: (multinode-325713) Calling .GetIP
	I1001 19:51:11.229092   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:51:11.229594   48985 main.go:141] libmachine: (multinode-325713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:17:a5", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:45:45 +0000 UTC Type:0 Mac:52:54:00:df:17:a5 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-325713 Clientid:01:52:54:00:df:17:a5}
	I1001 19:51:11.229624   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined IP address 192.168.39.165 and MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:51:11.229827   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHHostname
	I1001 19:51:11.232392   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:51:11.232794   48985 main.go:141] libmachine: (multinode-325713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:17:a5", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:45:45 +0000 UTC Type:0 Mac:52:54:00:df:17:a5 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-325713 Clientid:01:52:54:00:df:17:a5}
	I1001 19:51:11.232824   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined IP address 192.168.39.165 and MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:51:11.232988   48985 provision.go:143] copyHostCerts
	I1001 19:51:11.233016   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:51:11.233051   48985 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 19:51:11.233060   48985 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 19:51:11.233128   48985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 19:51:11.233205   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:51:11.233222   48985 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 19:51:11.233228   48985 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 19:51:11.233250   48985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 19:51:11.233308   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:51:11.233325   48985 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 19:51:11.233331   48985 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 19:51:11.233353   48985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 19:51:11.233401   48985 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.multinode-325713 san=[127.0.0.1 192.168.39.165 localhost minikube multinode-325713]
	I1001 19:51:11.334843   48985 provision.go:177] copyRemoteCerts
	I1001 19:51:11.334897   48985 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 19:51:11.334919   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHHostname
	I1001 19:51:11.337914   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:51:11.338230   48985 main.go:141] libmachine: (multinode-325713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:17:a5", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:45:45 +0000 UTC Type:0 Mac:52:54:00:df:17:a5 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-325713 Clientid:01:52:54:00:df:17:a5}
	I1001 19:51:11.338261   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined IP address 192.168.39.165 and MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:51:11.338450   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHPort
	I1001 19:51:11.338642   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHKeyPath
	I1001 19:51:11.338797   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHUsername
	I1001 19:51:11.338937   48985 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/multinode-325713/id_rsa Username:docker}
	I1001 19:51:11.430505   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 19:51:11.430569   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1001 19:51:11.456162   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 19:51:11.456250   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 19:51:11.486004   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 19:51:11.486087   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 19:51:11.511653   48985 provision.go:87] duration metric: took 285.917641ms to configureAuth
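The certificates copied in the scp steps above end up under /etc/docker on the node. A quick way to inspect the provisioned server certificate, assuming shell access through the minikube ssh wrapper and openssl in the node image, is:

    # Print subject and validity window of the server certificate provisioned above.
    # Hypothetical manual check; the paths come from the copyRemoteCerts steps in the log.
    minikube ssh -p multinode-325713 -- "sudo openssl x509 -in /etc/docker/server.pem -noout -subject -dates"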
	I1001 19:51:11.511688   48985 buildroot.go:189] setting minikube options for container-runtime
	I1001 19:51:11.511934   48985 config.go:182] Loaded profile config "multinode-325713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:51:11.512027   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHHostname
	I1001 19:51:11.514911   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:51:11.515302   48985 main.go:141] libmachine: (multinode-325713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:17:a5", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:45:45 +0000 UTC Type:0 Mac:52:54:00:df:17:a5 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-325713 Clientid:01:52:54:00:df:17:a5}
	I1001 19:51:11.515332   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined IP address 192.168.39.165 and MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:51:11.515471   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHPort
	I1001 19:51:11.515653   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHKeyPath
	I1001 19:51:11.515834   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHKeyPath
	I1001 19:51:11.515986   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHUsername
	I1001 19:51:11.516153   48985 main.go:141] libmachine: Using SSH client type: native
	I1001 19:51:11.516353   48985 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I1001 19:51:11.516387   48985 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 19:52:42.157687   48985 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 19:52:42.157718   48985 machine.go:96] duration metric: took 1m31.310530644s to provisionDockerMachine
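Almost all of that 1m31s sits between the SSH command issued at 19:51:11 and its result at 19:52:42, i.e. in the tee plus "systemctl restart crio" chain shown above, with the restart being the likely slow part. A rough way to look at the restart in isolation, assuming the same SSH access, is:

    # Check when CRI-O last became active and scan its recent journal for slow-start hints.
    # Assumes minikube ssh access to the same profile; systemctl/journalctl ship in the node image.
    minikube ssh -p multinode-325713 -- "sudo systemctl show crio -p ActiveEnterTimestamp"
    minikube ssh -p multinode-325713 -- "sudo journalctl -u crio --no-pager -n 50"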
	I1001 19:52:42.157730   48985 start.go:293] postStartSetup for "multinode-325713" (driver="kvm2")
	I1001 19:52:42.157741   48985 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 19:52:42.157756   48985 main.go:141] libmachine: (multinode-325713) Calling .DriverName
	I1001 19:52:42.158042   48985 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 19:52:42.158068   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHHostname
	I1001 19:52:42.161141   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:52:42.161584   48985 main.go:141] libmachine: (multinode-325713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:17:a5", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:45:45 +0000 UTC Type:0 Mac:52:54:00:df:17:a5 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-325713 Clientid:01:52:54:00:df:17:a5}
	I1001 19:52:42.161625   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined IP address 192.168.39.165 and MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:52:42.161786   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHPort
	I1001 19:52:42.161945   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHKeyPath
	I1001 19:52:42.162083   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHUsername
	I1001 19:52:42.162195   48985 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/multinode-325713/id_rsa Username:docker}
	I1001 19:52:42.251747   48985 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 19:52:42.256158   48985 command_runner.go:130] > NAME=Buildroot
	I1001 19:52:42.256182   48985 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1001 19:52:42.256189   48985 command_runner.go:130] > ID=buildroot
	I1001 19:52:42.256195   48985 command_runner.go:130] > VERSION_ID=2023.02.9
	I1001 19:52:42.256202   48985 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1001 19:52:42.256242   48985 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 19:52:42.256255   48985 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 19:52:42.256319   48985 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 19:52:42.256429   48985 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 19:52:42.256441   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /etc/ssl/certs/184302.pem
	I1001 19:52:42.256568   48985 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 19:52:42.266325   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:52:42.289521   48985 start.go:296] duration metric: took 131.778857ms for postStartSetup
	I1001 19:52:42.289559   48985 fix.go:56] duration metric: took 1m31.463837736s for fixHost
	I1001 19:52:42.289581   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHHostname
	I1001 19:52:42.292123   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:52:42.292802   48985 main.go:141] libmachine: (multinode-325713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:17:a5", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:45:45 +0000 UTC Type:0 Mac:52:54:00:df:17:a5 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-325713 Clientid:01:52:54:00:df:17:a5}
	I1001 19:52:42.292838   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined IP address 192.168.39.165 and MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:52:42.293011   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHPort
	I1001 19:52:42.293184   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHKeyPath
	I1001 19:52:42.293347   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHKeyPath
	I1001 19:52:42.293474   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHUsername
	I1001 19:52:42.293620   48985 main.go:141] libmachine: Using SSH client type: native
	I1001 19:52:42.293850   48985 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I1001 19:52:42.293869   48985 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 19:52:42.404923   48985 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727812362.386079271
	
	I1001 19:52:42.404950   48985 fix.go:216] guest clock: 1727812362.386079271
	I1001 19:52:42.404960   48985 fix.go:229] Guest: 2024-10-01 19:52:42.386079271 +0000 UTC Remote: 2024-10-01 19:52:42.289564082 +0000 UTC m=+91.589958315 (delta=96.515189ms)
	I1001 19:52:42.405011   48985 fix.go:200] guest clock delta is within tolerance: 96.515189ms
	I1001 19:52:42.405023   48985 start.go:83] releasing machines lock for "multinode-325713", held for 1m31.579313815s
	I1001 19:52:42.405056   48985 main.go:141] libmachine: (multinode-325713) Calling .DriverName
	I1001 19:52:42.405314   48985 main.go:141] libmachine: (multinode-325713) Calling .GetIP
	I1001 19:52:42.408372   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:52:42.408735   48985 main.go:141] libmachine: (multinode-325713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:17:a5", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:45:45 +0000 UTC Type:0 Mac:52:54:00:df:17:a5 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-325713 Clientid:01:52:54:00:df:17:a5}
	I1001 19:52:42.408779   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined IP address 192.168.39.165 and MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:52:42.408951   48985 main.go:141] libmachine: (multinode-325713) Calling .DriverName
	I1001 19:52:42.409481   48985 main.go:141] libmachine: (multinode-325713) Calling .DriverName
	I1001 19:52:42.409657   48985 main.go:141] libmachine: (multinode-325713) Calling .DriverName
	I1001 19:52:42.409767   48985 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 19:52:42.409807   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHHostname
	I1001 19:52:42.409921   48985 ssh_runner.go:195] Run: cat /version.json
	I1001 19:52:42.409944   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHHostname
	I1001 19:52:42.412688   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:52:42.412710   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:52:42.413067   48985 main.go:141] libmachine: (multinode-325713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:17:a5", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:45:45 +0000 UTC Type:0 Mac:52:54:00:df:17:a5 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-325713 Clientid:01:52:54:00:df:17:a5}
	I1001 19:52:42.413094   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined IP address 192.168.39.165 and MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:52:42.413195   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHPort
	I1001 19:52:42.413233   48985 main.go:141] libmachine: (multinode-325713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:17:a5", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:45:45 +0000 UTC Type:0 Mac:52:54:00:df:17:a5 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-325713 Clientid:01:52:54:00:df:17:a5}
	I1001 19:52:42.413258   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined IP address 192.168.39.165 and MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:52:42.413357   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHKeyPath
	I1001 19:52:42.413445   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHPort
	I1001 19:52:42.413518   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHUsername
	I1001 19:52:42.413576   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHKeyPath
	I1001 19:52:42.413648   48985 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/multinode-325713/id_rsa Username:docker}
	I1001 19:52:42.413680   48985 main.go:141] libmachine: (multinode-325713) Calling .GetSSHUsername
	I1001 19:52:42.413786   48985 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/multinode-325713/id_rsa Username:docker}
	I1001 19:52:42.531775   48985 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1001 19:52:42.532495   48985 command_runner.go:130] > {"iso_version": "v1.34.0-1727108440-19696", "kicbase_version": "v0.0.45-1726784731-19672", "minikube_version": "v1.34.0", "commit": "09d18ff16db81cf1cb24cd6e95f197b54c5f843c"}
	I1001 19:52:42.532649   48985 ssh_runner.go:195] Run: systemctl --version
	I1001 19:52:42.538741   48985 command_runner.go:130] > systemd 252 (252)
	I1001 19:52:42.538792   48985 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1001 19:52:42.538856   48985 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 19:52:42.695914   48985 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1001 19:52:42.702625   48985 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1001 19:52:42.703173   48985 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 19:52:42.703259   48985 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 19:52:42.713694   48985 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1001 19:52:42.713715   48985 start.go:495] detecting cgroup driver to use...
	I1001 19:52:42.713774   48985 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 19:52:42.729684   48985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 19:52:42.744592   48985 docker.go:217] disabling cri-docker service (if available) ...
	I1001 19:52:42.744650   48985 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 19:52:42.758763   48985 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 19:52:42.772847   48985 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 19:52:42.916228   48985 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 19:52:43.063910   48985 docker.go:233] disabling docker service ...
	I1001 19:52:43.063972   48985 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 19:52:43.081069   48985 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 19:52:43.094846   48985 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 19:52:43.235983   48985 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 19:52:43.375325   48985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 19:52:43.389733   48985 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 19:52:43.409229   48985 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1001 19:52:43.409276   48985 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 19:52:43.409330   48985 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:52:43.419916   48985 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 19:52:43.419986   48985 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:52:43.430540   48985 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:52:43.440765   48985 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:52:43.451073   48985 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 19:52:43.461793   48985 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:52:43.472503   48985 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:52:43.483966   48985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 19:52:43.494764   48985 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 19:52:43.503916   48985 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1001 19:52:43.504017   48985 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 19:52:43.513270   48985 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:52:43.655462   48985 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 19:52:49.608874   48985 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.953372381s)
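For a sanity check of what the sed edits above converge on, the expected values can be read straight off the sed expressions and compared against the drop-in on the node; a sketch, with the expected lines noted as comments rather than a capture:

    # Expected keys after the sed edits above (reconstructed from the sed expressions, not captured):
    #   pause_image     = "registry.k8s.io/pause:3.10"
    #   cgroup_manager  = "cgroupfs"
    #   conmon_cgroup   = "pod"
    #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]
    minikube ssh -p multinode-325713 -- "sudo cat /etc/crio/crio.conf.d/02-crio.conf"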
	I1001 19:52:49.608907   48985 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 19:52:49.608950   48985 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 19:52:49.613603   48985 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1001 19:52:49.613629   48985 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1001 19:52:49.613638   48985 command_runner.go:130] > Device: 0,22	Inode: 1321        Links: 1
	I1001 19:52:49.613647   48985 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1001 19:52:49.613652   48985 command_runner.go:130] > Access: 2024-10-01 19:52:49.494949773 +0000
	I1001 19:52:49.613659   48985 command_runner.go:130] > Modify: 2024-10-01 19:52:49.494949773 +0000
	I1001 19:52:49.613664   48985 command_runner.go:130] > Change: 2024-10-01 19:52:49.494949773 +0000
	I1001 19:52:49.613669   48985 command_runner.go:130] >  Birth: -
	I1001 19:52:49.613693   48985 start.go:563] Will wait 60s for crictl version
	I1001 19:52:49.613740   48985 ssh_runner.go:195] Run: which crictl
	I1001 19:52:49.617116   48985 command_runner.go:130] > /usr/bin/crictl
	I1001 19:52:49.617170   48985 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 19:52:49.654888   48985 command_runner.go:130] > Version:  0.1.0
	I1001 19:52:49.654914   48985 command_runner.go:130] > RuntimeName:  cri-o
	I1001 19:52:49.654921   48985 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1001 19:52:49.654928   48985 command_runner.go:130] > RuntimeApiVersion:  v1
	I1001 19:52:49.654944   48985 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 19:52:49.655012   48985 ssh_runner.go:195] Run: crio --version
	I1001 19:52:49.683006   48985 command_runner.go:130] > crio version 1.29.1
	I1001 19:52:49.683031   48985 command_runner.go:130] > Version:        1.29.1
	I1001 19:52:49.683037   48985 command_runner.go:130] > GitCommit:      unknown
	I1001 19:52:49.683042   48985 command_runner.go:130] > GitCommitDate:  unknown
	I1001 19:52:49.683046   48985 command_runner.go:130] > GitTreeState:   clean
	I1001 19:52:49.683052   48985 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I1001 19:52:49.683056   48985 command_runner.go:130] > GoVersion:      go1.21.6
	I1001 19:52:49.683060   48985 command_runner.go:130] > Compiler:       gc
	I1001 19:52:49.683064   48985 command_runner.go:130] > Platform:       linux/amd64
	I1001 19:52:49.683067   48985 command_runner.go:130] > Linkmode:       dynamic
	I1001 19:52:49.683073   48985 command_runner.go:130] > BuildTags:      
	I1001 19:52:49.683077   48985 command_runner.go:130] >   containers_image_ostree_stub
	I1001 19:52:49.683081   48985 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1001 19:52:49.683084   48985 command_runner.go:130] >   btrfs_noversion
	I1001 19:52:49.683088   48985 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1001 19:52:49.683095   48985 command_runner.go:130] >   libdm_no_deferred_remove
	I1001 19:52:49.683100   48985 command_runner.go:130] >   seccomp
	I1001 19:52:49.683107   48985 command_runner.go:130] > LDFlags:          unknown
	I1001 19:52:49.683114   48985 command_runner.go:130] > SeccompEnabled:   true
	I1001 19:52:49.683121   48985 command_runner.go:130] > AppArmorEnabled:  false
	I1001 19:52:49.683195   48985 ssh_runner.go:195] Run: crio --version
	I1001 19:52:49.709814   48985 command_runner.go:130] > crio version 1.29.1
	I1001 19:52:49.709844   48985 command_runner.go:130] > Version:        1.29.1
	I1001 19:52:49.709851   48985 command_runner.go:130] > GitCommit:      unknown
	I1001 19:52:49.709857   48985 command_runner.go:130] > GitCommitDate:  unknown
	I1001 19:52:49.709861   48985 command_runner.go:130] > GitTreeState:   clean
	I1001 19:52:49.709867   48985 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I1001 19:52:49.709873   48985 command_runner.go:130] > GoVersion:      go1.21.6
	I1001 19:52:49.709877   48985 command_runner.go:130] > Compiler:       gc
	I1001 19:52:49.709881   48985 command_runner.go:130] > Platform:       linux/amd64
	I1001 19:52:49.709885   48985 command_runner.go:130] > Linkmode:       dynamic
	I1001 19:52:49.709889   48985 command_runner.go:130] > BuildTags:      
	I1001 19:52:49.709893   48985 command_runner.go:130] >   containers_image_ostree_stub
	I1001 19:52:49.709897   48985 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1001 19:52:49.709901   48985 command_runner.go:130] >   btrfs_noversion
	I1001 19:52:49.709905   48985 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1001 19:52:49.709911   48985 command_runner.go:130] >   libdm_no_deferred_remove
	I1001 19:52:49.709915   48985 command_runner.go:130] >   seccomp
	I1001 19:52:49.709921   48985 command_runner.go:130] > LDFlags:          unknown
	I1001 19:52:49.709925   48985 command_runner.go:130] > SeccompEnabled:   true
	I1001 19:52:49.709930   48985 command_runner.go:130] > AppArmorEnabled:  false
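The same runtime probes can be repeated by hand against the node when the CRI-O side needs a closer look; a minimal sketch reusing exactly the commands the log runs:

    # Re-run the runtime version checks from the log directly on the node.
    # crictl picks up the endpoint written to /etc/crictl.yaml earlier in the log.
    minikube ssh -p multinode-325713 -- "sudo crictl version"
    minikube ssh -p multinode-325713 -- "crio --version"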
	I1001 19:52:49.712655   48985 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 19:52:49.714066   48985 main.go:141] libmachine: (multinode-325713) Calling .GetIP
	I1001 19:52:49.716752   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:52:49.717108   48985 main.go:141] libmachine: (multinode-325713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:17:a5", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:45:45 +0000 UTC Type:0 Mac:52:54:00:df:17:a5 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-325713 Clientid:01:52:54:00:df:17:a5}
	I1001 19:52:49.717137   48985 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined IP address 192.168.39.165 and MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:52:49.717326   48985 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 19:52:49.721372   48985 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1001 19:52:49.721574   48985 kubeadm.go:883] updating cluster {Name:multinode-325713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:multinode-325713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.61 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadge
t:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 19:52:49.721709   48985 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 19:52:49.721763   48985 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 19:52:49.758658   48985 command_runner.go:130] > {
	I1001 19:52:49.758685   48985 command_runner.go:130] >   "images": [
	I1001 19:52:49.758691   48985 command_runner.go:130] >     {
	I1001 19:52:49.758704   48985 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1001 19:52:49.758713   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.758723   48985 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1001 19:52:49.758728   48985 command_runner.go:130] >       ],
	I1001 19:52:49.758735   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.758748   48985 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1001 19:52:49.758763   48985 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1001 19:52:49.758769   48985 command_runner.go:130] >       ],
	I1001 19:52:49.758780   48985 command_runner.go:130] >       "size": "87190579",
	I1001 19:52:49.758790   48985 command_runner.go:130] >       "uid": null,
	I1001 19:52:49.758799   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.758811   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.758819   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.758824   48985 command_runner.go:130] >     },
	I1001 19:52:49.758829   48985 command_runner.go:130] >     {
	I1001 19:52:49.758841   48985 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1001 19:52:49.758850   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.758860   48985 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1001 19:52:49.758869   48985 command_runner.go:130] >       ],
	I1001 19:52:49.758879   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.758906   48985 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1001 19:52:49.758920   48985 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1001 19:52:49.758924   48985 command_runner.go:130] >       ],
	I1001 19:52:49.758929   48985 command_runner.go:130] >       "size": "1363676",
	I1001 19:52:49.758937   48985 command_runner.go:130] >       "uid": null,
	I1001 19:52:49.758950   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.758959   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.758969   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.758977   48985 command_runner.go:130] >     },
	I1001 19:52:49.758989   48985 command_runner.go:130] >     {
	I1001 19:52:49.759000   48985 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1001 19:52:49.759009   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.759017   48985 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1001 19:52:49.759022   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759031   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.759046   48985 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1001 19:52:49.759062   48985 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1001 19:52:49.759071   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759081   48985 command_runner.go:130] >       "size": "31470524",
	I1001 19:52:49.759090   48985 command_runner.go:130] >       "uid": null,
	I1001 19:52:49.759100   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.759107   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.759113   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.759121   48985 command_runner.go:130] >     },
	I1001 19:52:49.759128   48985 command_runner.go:130] >     {
	I1001 19:52:49.759137   48985 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1001 19:52:49.759146   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.759154   48985 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1001 19:52:49.759162   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759168   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.759182   48985 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1001 19:52:49.759203   48985 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1001 19:52:49.759212   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759222   48985 command_runner.go:130] >       "size": "63273227",
	I1001 19:52:49.759231   48985 command_runner.go:130] >       "uid": null,
	I1001 19:52:49.759238   48985 command_runner.go:130] >       "username": "nonroot",
	I1001 19:52:49.759248   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.759256   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.759264   48985 command_runner.go:130] >     },
	I1001 19:52:49.759269   48985 command_runner.go:130] >     {
	I1001 19:52:49.759280   48985 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1001 19:52:49.759289   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.759297   48985 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1001 19:52:49.759302   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759307   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.759316   48985 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1001 19:52:49.759324   48985 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1001 19:52:49.759333   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759337   48985 command_runner.go:130] >       "size": "149009664",
	I1001 19:52:49.759341   48985 command_runner.go:130] >       "uid": {
	I1001 19:52:49.759345   48985 command_runner.go:130] >         "value": "0"
	I1001 19:52:49.759351   48985 command_runner.go:130] >       },
	I1001 19:52:49.759354   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.759358   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.759364   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.759366   48985 command_runner.go:130] >     },
	I1001 19:52:49.759370   48985 command_runner.go:130] >     {
	I1001 19:52:49.759377   48985 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1001 19:52:49.759382   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.759387   48985 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1001 19:52:49.759392   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759396   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.759405   48985 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1001 19:52:49.759414   48985 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1001 19:52:49.759419   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759423   48985 command_runner.go:130] >       "size": "95237600",
	I1001 19:52:49.759434   48985 command_runner.go:130] >       "uid": {
	I1001 19:52:49.759445   48985 command_runner.go:130] >         "value": "0"
	I1001 19:52:49.759449   48985 command_runner.go:130] >       },
	I1001 19:52:49.759453   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.759457   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.759461   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.759464   48985 command_runner.go:130] >     },
	I1001 19:52:49.759467   48985 command_runner.go:130] >     {
	I1001 19:52:49.759473   48985 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1001 19:52:49.759479   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.759484   48985 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1001 19:52:49.759488   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759493   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.759501   48985 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1001 19:52:49.759510   48985 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1001 19:52:49.759516   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759520   48985 command_runner.go:130] >       "size": "89437508",
	I1001 19:52:49.759524   48985 command_runner.go:130] >       "uid": {
	I1001 19:52:49.759528   48985 command_runner.go:130] >         "value": "0"
	I1001 19:52:49.759531   48985 command_runner.go:130] >       },
	I1001 19:52:49.759535   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.759541   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.759545   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.759548   48985 command_runner.go:130] >     },
	I1001 19:52:49.759554   48985 command_runner.go:130] >     {
	I1001 19:52:49.759560   48985 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1001 19:52:49.759565   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.759570   48985 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1001 19:52:49.759573   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759577   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.759597   48985 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1001 19:52:49.759606   48985 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1001 19:52:49.759609   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759622   48985 command_runner.go:130] >       "size": "92733849",
	I1001 19:52:49.759628   48985 command_runner.go:130] >       "uid": null,
	I1001 19:52:49.759632   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.759635   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.759639   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.759643   48985 command_runner.go:130] >     },
	I1001 19:52:49.759648   48985 command_runner.go:130] >     {
	I1001 19:52:49.759657   48985 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1001 19:52:49.759663   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.759670   48985 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1001 19:52:49.759674   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759679   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.759694   48985 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1001 19:52:49.759708   48985 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1001 19:52:49.759716   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759722   48985 command_runner.go:130] >       "size": "68420934",
	I1001 19:52:49.759730   48985 command_runner.go:130] >       "uid": {
	I1001 19:52:49.759735   48985 command_runner.go:130] >         "value": "0"
	I1001 19:52:49.759740   48985 command_runner.go:130] >       },
	I1001 19:52:49.759746   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.759755   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.759761   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.759769   48985 command_runner.go:130] >     },
	I1001 19:52:49.759775   48985 command_runner.go:130] >     {
	I1001 19:52:49.759787   48985 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1001 19:52:49.759797   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.759804   48985 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1001 19:52:49.759811   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759815   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.759824   48985 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1001 19:52:49.759830   48985 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1001 19:52:49.759838   48985 command_runner.go:130] >       ],
	I1001 19:52:49.759844   48985 command_runner.go:130] >       "size": "742080",
	I1001 19:52:49.759856   48985 command_runner.go:130] >       "uid": {
	I1001 19:52:49.759865   48985 command_runner.go:130] >         "value": "65535"
	I1001 19:52:49.759871   48985 command_runner.go:130] >       },
	I1001 19:52:49.759880   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.759886   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.759896   48985 command_runner.go:130] >       "pinned": true
	I1001 19:52:49.759901   48985 command_runner.go:130] >     }
	I1001 19:52:49.759910   48985 command_runner.go:130] >   ]
	I1001 19:52:49.759915   48985 command_runner.go:130] > }
	I1001 19:52:49.760127   48985 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 19:52:49.760148   48985 crio.go:433] Images already preloaded, skipping extraction
	I1001 19:52:49.760210   48985 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 19:52:49.791654   48985 command_runner.go:130] > {
	I1001 19:52:49.791675   48985 command_runner.go:130] >   "images": [
	I1001 19:52:49.791681   48985 command_runner.go:130] >     {
	I1001 19:52:49.791690   48985 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1001 19:52:49.791700   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.791712   48985 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1001 19:52:49.791718   48985 command_runner.go:130] >       ],
	I1001 19:52:49.791725   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.791748   48985 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1001 19:52:49.791763   48985 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1001 19:52:49.791770   48985 command_runner.go:130] >       ],
	I1001 19:52:49.791782   48985 command_runner.go:130] >       "size": "87190579",
	I1001 19:52:49.791789   48985 command_runner.go:130] >       "uid": null,
	I1001 19:52:49.791801   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.791823   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.791834   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.791839   48985 command_runner.go:130] >     },
	I1001 19:52:49.791844   48985 command_runner.go:130] >     {
	I1001 19:52:49.791851   48985 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1001 19:52:49.791858   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.791863   48985 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1001 19:52:49.791870   48985 command_runner.go:130] >       ],
	I1001 19:52:49.791875   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.791885   48985 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1001 19:52:49.791895   48985 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1001 19:52:49.791903   48985 command_runner.go:130] >       ],
	I1001 19:52:49.791911   48985 command_runner.go:130] >       "size": "1363676",
	I1001 19:52:49.791916   48985 command_runner.go:130] >       "uid": null,
	I1001 19:52:49.791925   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.791930   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.791934   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.791940   48985 command_runner.go:130] >     },
	I1001 19:52:49.791944   48985 command_runner.go:130] >     {
	I1001 19:52:49.791954   48985 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1001 19:52:49.791959   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.791966   48985 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1001 19:52:49.791972   48985 command_runner.go:130] >       ],
	I1001 19:52:49.791977   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.791987   48985 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1001 19:52:49.792000   48985 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1001 19:52:49.792006   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792011   48985 command_runner.go:130] >       "size": "31470524",
	I1001 19:52:49.792023   48985 command_runner.go:130] >       "uid": null,
	I1001 19:52:49.792027   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.792031   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.792037   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.792041   48985 command_runner.go:130] >     },
	I1001 19:52:49.792048   48985 command_runner.go:130] >     {
	I1001 19:52:49.792054   48985 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1001 19:52:49.792069   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.792074   48985 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1001 19:52:49.792077   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792081   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.792088   48985 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1001 19:52:49.792104   48985 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1001 19:52:49.792110   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792114   48985 command_runner.go:130] >       "size": "63273227",
	I1001 19:52:49.792118   48985 command_runner.go:130] >       "uid": null,
	I1001 19:52:49.792123   48985 command_runner.go:130] >       "username": "nonroot",
	I1001 19:52:49.792132   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.792136   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.792141   48985 command_runner.go:130] >     },
	I1001 19:52:49.792144   48985 command_runner.go:130] >     {
	I1001 19:52:49.792150   48985 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1001 19:52:49.792156   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.792161   48985 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1001 19:52:49.792164   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792168   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.792175   48985 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1001 19:52:49.792183   48985 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1001 19:52:49.792186   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792190   48985 command_runner.go:130] >       "size": "149009664",
	I1001 19:52:49.792195   48985 command_runner.go:130] >       "uid": {
	I1001 19:52:49.792198   48985 command_runner.go:130] >         "value": "0"
	I1001 19:52:49.792203   48985 command_runner.go:130] >       },
	I1001 19:52:49.792207   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.792211   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.792215   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.792218   48985 command_runner.go:130] >     },
	I1001 19:52:49.792222   48985 command_runner.go:130] >     {
	I1001 19:52:49.792229   48985 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1001 19:52:49.792233   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.792238   48985 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1001 19:52:49.792244   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792248   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.792255   48985 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1001 19:52:49.792264   48985 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1001 19:52:49.792268   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792271   48985 command_runner.go:130] >       "size": "95237600",
	I1001 19:52:49.792275   48985 command_runner.go:130] >       "uid": {
	I1001 19:52:49.792281   48985 command_runner.go:130] >         "value": "0"
	I1001 19:52:49.792289   48985 command_runner.go:130] >       },
	I1001 19:52:49.792294   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.792298   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.792301   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.792306   48985 command_runner.go:130] >     },
	I1001 19:52:49.792309   48985 command_runner.go:130] >     {
	I1001 19:52:49.792315   48985 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1001 19:52:49.792321   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.792326   48985 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1001 19:52:49.792330   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792334   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.792341   48985 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1001 19:52:49.792350   48985 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1001 19:52:49.792365   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792371   48985 command_runner.go:130] >       "size": "89437508",
	I1001 19:52:49.792376   48985 command_runner.go:130] >       "uid": {
	I1001 19:52:49.792380   48985 command_runner.go:130] >         "value": "0"
	I1001 19:52:49.792386   48985 command_runner.go:130] >       },
	I1001 19:52:49.792390   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.792394   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.792397   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.792400   48985 command_runner.go:130] >     },
	I1001 19:52:49.792404   48985 command_runner.go:130] >     {
	I1001 19:52:49.792412   48985 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1001 19:52:49.792416   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.792421   48985 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1001 19:52:49.792431   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792437   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.792450   48985 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1001 19:52:49.792460   48985 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1001 19:52:49.792463   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792467   48985 command_runner.go:130] >       "size": "92733849",
	I1001 19:52:49.792471   48985 command_runner.go:130] >       "uid": null,
	I1001 19:52:49.792475   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.792480   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.792484   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.792487   48985 command_runner.go:130] >     },
	I1001 19:52:49.792490   48985 command_runner.go:130] >     {
	I1001 19:52:49.792496   48985 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1001 19:52:49.792502   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.792507   48985 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1001 19:52:49.792511   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792515   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.792522   48985 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1001 19:52:49.792530   48985 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1001 19:52:49.792534   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792538   48985 command_runner.go:130] >       "size": "68420934",
	I1001 19:52:49.792544   48985 command_runner.go:130] >       "uid": {
	I1001 19:52:49.792548   48985 command_runner.go:130] >         "value": "0"
	I1001 19:52:49.792551   48985 command_runner.go:130] >       },
	I1001 19:52:49.792555   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.792559   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.792563   48985 command_runner.go:130] >       "pinned": false
	I1001 19:52:49.792567   48985 command_runner.go:130] >     },
	I1001 19:52:49.792571   48985 command_runner.go:130] >     {
	I1001 19:52:49.792578   48985 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1001 19:52:49.792582   48985 command_runner.go:130] >       "repoTags": [
	I1001 19:52:49.792586   48985 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1001 19:52:49.792590   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792594   48985 command_runner.go:130] >       "repoDigests": [
	I1001 19:52:49.792603   48985 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1001 19:52:49.792612   48985 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1001 19:52:49.792617   48985 command_runner.go:130] >       ],
	I1001 19:52:49.792621   48985 command_runner.go:130] >       "size": "742080",
	I1001 19:52:49.792624   48985 command_runner.go:130] >       "uid": {
	I1001 19:52:49.792629   48985 command_runner.go:130] >         "value": "65535"
	I1001 19:52:49.792639   48985 command_runner.go:130] >       },
	I1001 19:52:49.792643   48985 command_runner.go:130] >       "username": "",
	I1001 19:52:49.792647   48985 command_runner.go:130] >       "spec": null,
	I1001 19:52:49.792651   48985 command_runner.go:130] >       "pinned": true
	I1001 19:52:49.792654   48985 command_runner.go:130] >     }
	I1001 19:52:49.792657   48985 command_runner.go:130] >   ]
	I1001 19:52:49.792660   48985 command_runner.go:130] > }
	I1001 19:52:49.792766   48985 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 19:52:49.792777   48985 cache_images.go:84] Images are preloaded, skipping loading
	I1001 19:52:49.792785   48985 kubeadm.go:934] updating node { 192.168.39.165 8443 v1.31.1 crio true true} ...
	I1001 19:52:49.792881   48985 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-325713 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-325713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 19:52:49.792939   48985 ssh_runner.go:195] Run: crio config
	I1001 19:52:49.826974   48985 command_runner.go:130] ! time="2024-10-01 19:52:49.808021555Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1001 19:52:49.832557   48985 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1001 19:52:49.837777   48985 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1001 19:52:49.837813   48985 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1001 19:52:49.837824   48985 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1001 19:52:49.837829   48985 command_runner.go:130] > #
	I1001 19:52:49.837838   48985 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1001 19:52:49.837847   48985 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1001 19:52:49.837859   48985 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1001 19:52:49.837875   48985 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1001 19:52:49.837884   48985 command_runner.go:130] > # reload'.
	I1001 19:52:49.837892   48985 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1001 19:52:49.837900   48985 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1001 19:52:49.837907   48985 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1001 19:52:49.837914   48985 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1001 19:52:49.837923   48985 command_runner.go:130] > [crio]
	I1001 19:52:49.837935   48985 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1001 19:52:49.837945   48985 command_runner.go:130] > # containers images, in this directory.
	I1001 19:52:49.837953   48985 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1001 19:52:49.837981   48985 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1001 19:52:49.837992   48985 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1001 19:52:49.838004   48985 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1001 19:52:49.838013   48985 command_runner.go:130] > # imagestore = ""
	I1001 19:52:49.838026   48985 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1001 19:52:49.838038   48985 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1001 19:52:49.838047   48985 command_runner.go:130] > storage_driver = "overlay"
	I1001 19:52:49.838060   48985 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1001 19:52:49.838071   48985 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1001 19:52:49.838079   48985 command_runner.go:130] > storage_option = [
	I1001 19:52:49.838086   48985 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1001 19:52:49.838090   48985 command_runner.go:130] > ]
	I1001 19:52:49.838098   48985 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1001 19:52:49.838106   48985 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1001 19:52:49.838112   48985 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1001 19:52:49.838117   48985 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1001 19:52:49.838125   48985 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1001 19:52:49.838137   48985 command_runner.go:130] > # always happen on a node reboot
	I1001 19:52:49.838147   48985 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1001 19:52:49.838168   48985 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1001 19:52:49.838180   48985 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1001 19:52:49.838191   48985 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1001 19:52:49.838199   48985 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1001 19:52:49.838213   48985 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1001 19:52:49.838232   48985 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1001 19:52:49.838241   48985 command_runner.go:130] > # internal_wipe = true
	I1001 19:52:49.838255   48985 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1001 19:52:49.838266   48985 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1001 19:52:49.838275   48985 command_runner.go:130] > # internal_repair = false
	I1001 19:52:49.838286   48985 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1001 19:52:49.838297   48985 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1001 19:52:49.838308   48985 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1001 19:52:49.838319   48985 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1001 19:52:49.838334   48985 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1001 19:52:49.838340   48985 command_runner.go:130] > [crio.api]
	I1001 19:52:49.838346   48985 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1001 19:52:49.838352   48985 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1001 19:52:49.838357   48985 command_runner.go:130] > # IP address on which the stream server will listen.
	I1001 19:52:49.838363   48985 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1001 19:52:49.838369   48985 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1001 19:52:49.838376   48985 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1001 19:52:49.838379   48985 command_runner.go:130] > # stream_port = "0"
	I1001 19:52:49.838386   48985 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1001 19:52:49.838393   48985 command_runner.go:130] > # stream_enable_tls = false
	I1001 19:52:49.838399   48985 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1001 19:52:49.838405   48985 command_runner.go:130] > # stream_idle_timeout = ""
	I1001 19:52:49.838411   48985 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1001 19:52:49.838419   48985 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1001 19:52:49.838424   48985 command_runner.go:130] > # minutes.
	I1001 19:52:49.838428   48985 command_runner.go:130] > # stream_tls_cert = ""
	I1001 19:52:49.838440   48985 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1001 19:52:49.838448   48985 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1001 19:52:49.838454   48985 command_runner.go:130] > # stream_tls_key = ""
	I1001 19:52:49.838460   48985 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1001 19:52:49.838467   48985 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1001 19:52:49.838489   48985 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1001 19:52:49.838495   48985 command_runner.go:130] > # stream_tls_ca = ""
	I1001 19:52:49.838503   48985 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1001 19:52:49.838509   48985 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1001 19:52:49.838516   48985 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1001 19:52:49.838523   48985 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1001 19:52:49.838529   48985 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1001 19:52:49.838536   48985 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1001 19:52:49.838540   48985 command_runner.go:130] > [crio.runtime]
	I1001 19:52:49.838546   48985 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1001 19:52:49.838552   48985 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1001 19:52:49.838558   48985 command_runner.go:130] > # "nofile=1024:2048"
	I1001 19:52:49.838564   48985 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1001 19:52:49.838570   48985 command_runner.go:130] > # default_ulimits = [
	I1001 19:52:49.838573   48985 command_runner.go:130] > # ]
	I1001 19:52:49.838579   48985 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1001 19:52:49.838585   48985 command_runner.go:130] > # no_pivot = false
	I1001 19:52:49.838593   48985 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1001 19:52:49.838600   48985 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1001 19:52:49.838607   48985 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1001 19:52:49.838613   48985 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1001 19:52:49.838623   48985 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1001 19:52:49.838630   48985 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1001 19:52:49.838636   48985 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1001 19:52:49.838640   48985 command_runner.go:130] > # Cgroup setting for conmon
	I1001 19:52:49.838649   48985 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1001 19:52:49.838653   48985 command_runner.go:130] > conmon_cgroup = "pod"
	I1001 19:52:49.838659   48985 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1001 19:52:49.838667   48985 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1001 19:52:49.838674   48985 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1001 19:52:49.838680   48985 command_runner.go:130] > conmon_env = [
	I1001 19:52:49.838689   48985 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1001 19:52:49.838694   48985 command_runner.go:130] > ]
	I1001 19:52:49.838699   48985 command_runner.go:130] > # Additional environment variables to set for all the
	I1001 19:52:49.838706   48985 command_runner.go:130] > # containers. These are overridden if set in the
	I1001 19:52:49.838711   48985 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1001 19:52:49.838717   48985 command_runner.go:130] > # default_env = [
	I1001 19:52:49.838720   48985 command_runner.go:130] > # ]
	I1001 19:52:49.838726   48985 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1001 19:52:49.838735   48985 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1001 19:52:49.838739   48985 command_runner.go:130] > # selinux = false
	I1001 19:52:49.838745   48985 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1001 19:52:49.838753   48985 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1001 19:52:49.838763   48985 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1001 19:52:49.838769   48985 command_runner.go:130] > # seccomp_profile = ""
	I1001 19:52:49.838774   48985 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1001 19:52:49.838782   48985 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1001 19:52:49.838787   48985 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1001 19:52:49.838794   48985 command_runner.go:130] > # which might increase security.
	I1001 19:52:49.838798   48985 command_runner.go:130] > # This option is currently deprecated,
	I1001 19:52:49.838806   48985 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1001 19:52:49.838813   48985 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1001 19:52:49.838819   48985 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1001 19:52:49.838827   48985 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1001 19:52:49.838837   48985 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1001 19:52:49.838852   48985 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1001 19:52:49.838857   48985 command_runner.go:130] > # This option supports live configuration reload.
	I1001 19:52:49.838864   48985 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1001 19:52:49.838869   48985 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1001 19:52:49.838879   48985 command_runner.go:130] > # the cgroup blockio controller.
	I1001 19:52:49.838883   48985 command_runner.go:130] > # blockio_config_file = ""
	I1001 19:52:49.838893   48985 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1001 19:52:49.838899   48985 command_runner.go:130] > # blockio parameters.
	I1001 19:52:49.838903   48985 command_runner.go:130] > # blockio_reload = false
	I1001 19:52:49.838911   48985 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1001 19:52:49.838917   48985 command_runner.go:130] > # irqbalance daemon.
	I1001 19:52:49.838922   48985 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1001 19:52:49.838930   48985 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1001 19:52:49.838937   48985 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1001 19:52:49.838945   48985 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1001 19:52:49.838953   48985 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1001 19:52:49.838961   48985 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1001 19:52:49.838968   48985 command_runner.go:130] > # This option supports live configuration reload.
	I1001 19:52:49.838972   48985 command_runner.go:130] > # rdt_config_file = ""
	I1001 19:52:49.838978   48985 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1001 19:52:49.838982   48985 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1001 19:52:49.838998   48985 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1001 19:52:49.839004   48985 command_runner.go:130] > # separate_pull_cgroup = ""
	I1001 19:52:49.839010   48985 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1001 19:52:49.839019   48985 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1001 19:52:49.839022   48985 command_runner.go:130] > # will be added.
	I1001 19:52:49.839029   48985 command_runner.go:130] > # default_capabilities = [
	I1001 19:52:49.839045   48985 command_runner.go:130] > # 	"CHOWN",
	I1001 19:52:49.839055   48985 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1001 19:52:49.839061   48985 command_runner.go:130] > # 	"FSETID",
	I1001 19:52:49.839065   48985 command_runner.go:130] > # 	"FOWNER",
	I1001 19:52:49.839070   48985 command_runner.go:130] > # 	"SETGID",
	I1001 19:52:49.839074   48985 command_runner.go:130] > # 	"SETUID",
	I1001 19:52:49.839080   48985 command_runner.go:130] > # 	"SETPCAP",
	I1001 19:52:49.839084   48985 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1001 19:52:49.839094   48985 command_runner.go:130] > # 	"KILL",
	I1001 19:52:49.839098   48985 command_runner.go:130] > # ]
	I1001 19:52:49.839107   48985 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1001 19:52:49.839113   48985 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1001 19:52:49.839125   48985 command_runner.go:130] > # add_inheritable_capabilities = false
	I1001 19:52:49.839134   48985 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1001 19:52:49.839143   48985 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1001 19:52:49.839152   48985 command_runner.go:130] > default_sysctls = [
	I1001 19:52:49.839161   48985 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1001 19:52:49.839168   48985 command_runner.go:130] > ]
	I1001 19:52:49.839177   48985 command_runner.go:130] > # List of devices on the host that a
	I1001 19:52:49.839189   48985 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1001 19:52:49.839198   48985 command_runner.go:130] > # allowed_devices = [
	I1001 19:52:49.839206   48985 command_runner.go:130] > # 	"/dev/fuse",
	I1001 19:52:49.839212   48985 command_runner.go:130] > # ]
	I1001 19:52:49.839222   48985 command_runner.go:130] > # List of additional devices. specified as
	I1001 19:52:49.839231   48985 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1001 19:52:49.839238   48985 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1001 19:52:49.839244   48985 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1001 19:52:49.839250   48985 command_runner.go:130] > # additional_devices = [
	I1001 19:52:49.839253   48985 command_runner.go:130] > # ]
	I1001 19:52:49.839260   48985 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1001 19:52:49.839264   48985 command_runner.go:130] > # cdi_spec_dirs = [
	I1001 19:52:49.839268   48985 command_runner.go:130] > # 	"/etc/cdi",
	I1001 19:52:49.839272   48985 command_runner.go:130] > # 	"/var/run/cdi",
	I1001 19:52:49.839278   48985 command_runner.go:130] > # ]
	I1001 19:52:49.839284   48985 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1001 19:52:49.839298   48985 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1001 19:52:49.839303   48985 command_runner.go:130] > # Defaults to false.
	I1001 19:52:49.839310   48985 command_runner.go:130] > # device_ownership_from_security_context = false
	I1001 19:52:49.839316   48985 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1001 19:52:49.839323   48985 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1001 19:52:49.839329   48985 command_runner.go:130] > # hooks_dir = [
	I1001 19:52:49.839334   48985 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1001 19:52:49.839339   48985 command_runner.go:130] > # ]
	I1001 19:52:49.839344   48985 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1001 19:52:49.839352   48985 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1001 19:52:49.839359   48985 command_runner.go:130] > # its default mounts from the following two files:
	I1001 19:52:49.839364   48985 command_runner.go:130] > #
	I1001 19:52:49.839370   48985 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1001 19:52:49.839378   48985 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1001 19:52:49.839386   48985 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1001 19:52:49.839389   48985 command_runner.go:130] > #
	I1001 19:52:49.839394   48985 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1001 19:52:49.839402   48985 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1001 19:52:49.839411   48985 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1001 19:52:49.839419   48985 command_runner.go:130] > #      only add mounts it finds in this file.
	I1001 19:52:49.839425   48985 command_runner.go:130] > #
	I1001 19:52:49.839429   48985 command_runner.go:130] > # default_mounts_file = ""
	I1001 19:52:49.839437   48985 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1001 19:52:49.839443   48985 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1001 19:52:49.839449   48985 command_runner.go:130] > pids_limit = 1024
	I1001 19:52:49.839456   48985 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1001 19:52:49.839463   48985 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1001 19:52:49.839472   48985 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1001 19:52:49.839479   48985 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1001 19:52:49.839485   48985 command_runner.go:130] > # log_size_max = -1
	I1001 19:52:49.839491   48985 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1001 19:52:49.839497   48985 command_runner.go:130] > # log_to_journald = false
	I1001 19:52:49.839503   48985 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1001 19:52:49.839509   48985 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1001 19:52:49.839514   48985 command_runner.go:130] > # Path to directory for container attach sockets.
	I1001 19:52:49.839520   48985 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1001 19:52:49.839531   48985 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1001 19:52:49.839535   48985 command_runner.go:130] > # bind_mount_prefix = ""
	I1001 19:52:49.839541   48985 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1001 19:52:49.839545   48985 command_runner.go:130] > # read_only = false
	I1001 19:52:49.839552   48985 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1001 19:52:49.839559   48985 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1001 19:52:49.839563   48985 command_runner.go:130] > # live configuration reload.
	I1001 19:52:49.839572   48985 command_runner.go:130] > # log_level = "info"
	I1001 19:52:49.839578   48985 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1001 19:52:49.839585   48985 command_runner.go:130] > # This option supports live configuration reload.
	I1001 19:52:49.839588   48985 command_runner.go:130] > # log_filter = ""
	I1001 19:52:49.839594   48985 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1001 19:52:49.839603   48985 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1001 19:52:49.839609   48985 command_runner.go:130] > # separated by comma.
	I1001 19:52:49.839616   48985 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1001 19:52:49.839622   48985 command_runner.go:130] > # uid_mappings = ""
	I1001 19:52:49.839627   48985 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1001 19:52:49.839635   48985 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1001 19:52:49.839647   48985 command_runner.go:130] > # separated by comma.
	I1001 19:52:49.839654   48985 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1001 19:52:49.839662   48985 command_runner.go:130] > # gid_mappings = ""
	I1001 19:52:49.839669   48985 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1001 19:52:49.839676   48985 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1001 19:52:49.839688   48985 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1001 19:52:49.839697   48985 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1001 19:52:49.839703   48985 command_runner.go:130] > # minimum_mappable_uid = -1
	I1001 19:52:49.839709   48985 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1001 19:52:49.839717   48985 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1001 19:52:49.839724   48985 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1001 19:52:49.839732   48985 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1001 19:52:49.839737   48985 command_runner.go:130] > # minimum_mappable_gid = -1
	I1001 19:52:49.839743   48985 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1001 19:52:49.839749   48985 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1001 19:52:49.839755   48985 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1001 19:52:49.839760   48985 command_runner.go:130] > # ctr_stop_timeout = 30
	I1001 19:52:49.839765   48985 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1001 19:52:49.839773   48985 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1001 19:52:49.839779   48985 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1001 19:52:49.839786   48985 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1001 19:52:49.839790   48985 command_runner.go:130] > drop_infra_ctr = false
	I1001 19:52:49.839798   48985 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1001 19:52:49.839805   48985 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1001 19:52:49.839812   48985 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1001 19:52:49.839818   48985 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1001 19:52:49.839826   48985 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1001 19:52:49.839835   48985 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1001 19:52:49.839846   48985 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1001 19:52:49.839852   48985 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1001 19:52:49.839857   48985 command_runner.go:130] > # shared_cpuset = ""
	I1001 19:52:49.839863   48985 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1001 19:52:49.839869   48985 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1001 19:52:49.839873   48985 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1001 19:52:49.839882   48985 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1001 19:52:49.839888   48985 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1001 19:52:49.839893   48985 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1001 19:52:49.839903   48985 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1001 19:52:49.839909   48985 command_runner.go:130] > # enable_criu_support = false
	I1001 19:52:49.839914   48985 command_runner.go:130] > # Enable/disable the generation of the container,
	I1001 19:52:49.839922   48985 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1001 19:52:49.839926   48985 command_runner.go:130] > # enable_pod_events = false
	I1001 19:52:49.839934   48985 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1001 19:52:49.839949   48985 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1001 19:52:49.839954   48985 command_runner.go:130] > # default_runtime = "runc"
	I1001 19:52:49.839960   48985 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1001 19:52:49.839968   48985 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1001 19:52:49.839979   48985 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1001 19:52:49.839986   48985 command_runner.go:130] > # creation as a file is not desired either.
	I1001 19:52:49.839994   48985 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1001 19:52:49.840000   48985 command_runner.go:130] > # the hostname is being managed dynamically.
	I1001 19:52:49.840005   48985 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1001 19:52:49.840010   48985 command_runner.go:130] > # ]
	I1001 19:52:49.840016   48985 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1001 19:52:49.840024   48985 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1001 19:52:49.840033   48985 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1001 19:52:49.840040   48985 command_runner.go:130] > # Each entry in the table should follow the format:
	I1001 19:52:49.840043   48985 command_runner.go:130] > #
	I1001 19:52:49.840048   48985 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1001 19:52:49.840055   48985 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1001 19:52:49.840073   48985 command_runner.go:130] > # runtime_type = "oci"
	I1001 19:52:49.840079   48985 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1001 19:52:49.840084   48985 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1001 19:52:49.840089   48985 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1001 19:52:49.840093   48985 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1001 19:52:49.840099   48985 command_runner.go:130] > # monitor_env = []
	I1001 19:52:49.840104   48985 command_runner.go:130] > # privileged_without_host_devices = false
	I1001 19:52:49.840110   48985 command_runner.go:130] > # allowed_annotations = []
	I1001 19:52:49.840115   48985 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1001 19:52:49.840121   48985 command_runner.go:130] > # Where:
	I1001 19:52:49.840126   48985 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1001 19:52:49.840134   48985 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1001 19:52:49.840143   48985 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1001 19:52:49.840155   48985 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1001 19:52:49.840167   48985 command_runner.go:130] > #   in $PATH.
	I1001 19:52:49.840179   48985 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1001 19:52:49.840190   48985 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1001 19:52:49.840202   48985 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1001 19:52:49.840211   48985 command_runner.go:130] > #   state.
	I1001 19:52:49.840223   48985 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1001 19:52:49.840235   48985 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1001 19:52:49.840244   48985 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1001 19:52:49.840249   48985 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1001 19:52:49.840255   48985 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1001 19:52:49.840264   48985 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1001 19:52:49.840278   48985 command_runner.go:130] > #   The currently recognized values are:
	I1001 19:52:49.840286   48985 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1001 19:52:49.840294   48985 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1001 19:52:49.840301   48985 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1001 19:52:49.840309   48985 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1001 19:52:49.840318   48985 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1001 19:52:49.840325   48985 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1001 19:52:49.840334   48985 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1001 19:52:49.840342   48985 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1001 19:52:49.840349   48985 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1001 19:52:49.840367   48985 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1001 19:52:49.840377   48985 command_runner.go:130] > #   deprecated option "conmon".
	I1001 19:52:49.840388   48985 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1001 19:52:49.840397   48985 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1001 19:52:49.840406   48985 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1001 19:52:49.840411   48985 command_runner.go:130] > #   should be moved to the container's cgroup
	I1001 19:52:49.840417   48985 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1001 19:52:49.840425   48985 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1001 19:52:49.840431   48985 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1001 19:52:49.840439   48985 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1001 19:52:49.840443   48985 command_runner.go:130] > #
	I1001 19:52:49.840449   48985 command_runner.go:130] > # Using the seccomp notifier feature:
	I1001 19:52:49.840456   48985 command_runner.go:130] > #
	I1001 19:52:49.840464   48985 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1001 19:52:49.840472   48985 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1001 19:52:49.840478   48985 command_runner.go:130] > #
	I1001 19:52:49.840483   48985 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1001 19:52:49.840496   48985 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1001 19:52:49.840501   48985 command_runner.go:130] > #
	I1001 19:52:49.840506   48985 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1001 19:52:49.840512   48985 command_runner.go:130] > # feature.
	I1001 19:52:49.840516   48985 command_runner.go:130] > #
	I1001 19:52:49.840524   48985 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1001 19:52:49.840529   48985 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1001 19:52:49.840537   48985 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1001 19:52:49.840545   48985 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1001 19:52:49.840553   48985 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1001 19:52:49.840558   48985 command_runner.go:130] > #
	I1001 19:52:49.840563   48985 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1001 19:52:49.840571   48985 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1001 19:52:49.840576   48985 command_runner.go:130] > #
	I1001 19:52:49.840582   48985 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1001 19:52:49.840589   48985 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1001 19:52:49.840592   48985 command_runner.go:130] > #
	I1001 19:52:49.840598   48985 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1001 19:52:49.840606   48985 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1001 19:52:49.840609   48985 command_runner.go:130] > # limitation.
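
The comments above describe the seccomp notifier semantics: after a blocked syscall is reported, CRI-O waits out a 5-second window before acting on the workload, and any further blocked syscall resets that window. A minimal Go sketch of that resettable-timeout pattern, assuming a simple string channel for notifications (illustrative only, not CRI-O's implementation):

package main

import (
	"fmt"
	"time"
)

// watchSyscalls acts on the workload only after no new blocked syscall
// has been reported for a full timeout window; each report resets the window.
func watchSyscalls(blocked <-chan string, window time.Duration, stop func()) {
	timer := time.NewTimer(window)
	defer timer.Stop()
	for {
		select {
		case sc := <-blocked:
			fmt.Println("blocked syscall observed:", sc)
			if !timer.Stop() {
				<-timer.C // timer already fired; drain before reuse
			}
			timer.Reset(window) // new syscall discovered: restart the window
		case <-timer.C:
			stop() // quiet for the whole window: act on the workload
			return
		}
	}
}

func main() {
	reports := make(chan string, 1)
	reports <- "openat"
	// A short window keeps the demo quick; the documented default above is 5s.
	watchSyscalls(reports, 2*time.Second, func() { fmt.Println("stopping workload") })
}
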
	I1001 19:52:49.840616   48985 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1001 19:52:49.840623   48985 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1001 19:52:49.840626   48985 command_runner.go:130] > runtime_type = "oci"
	I1001 19:52:49.840632   48985 command_runner.go:130] > runtime_root = "/run/runc"
	I1001 19:52:49.840636   48985 command_runner.go:130] > runtime_config_path = ""
	I1001 19:52:49.840643   48985 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1001 19:52:49.840647   48985 command_runner.go:130] > monitor_cgroup = "pod"
	I1001 19:52:49.840653   48985 command_runner.go:130] > monitor_exec_cgroup = ""
	I1001 19:52:49.840657   48985 command_runner.go:130] > monitor_env = [
	I1001 19:52:49.840664   48985 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1001 19:52:49.840668   48985 command_runner.go:130] > ]
	I1001 19:52:49.840673   48985 command_runner.go:130] > privileged_without_host_devices = false
	I1001 19:52:49.840684   48985 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1001 19:52:49.840691   48985 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1001 19:52:49.840697   48985 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1001 19:52:49.840706   48985 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1001 19:52:49.840718   48985 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1001 19:52:49.840726   48985 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1001 19:52:49.840736   48985 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1001 19:52:49.840747   48985 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1001 19:52:49.840755   48985 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1001 19:52:49.840765   48985 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1001 19:52:49.840771   48985 command_runner.go:130] > # Example:
	I1001 19:52:49.840776   48985 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1001 19:52:49.840783   48985 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1001 19:52:49.840787   48985 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1001 19:52:49.840794   48985 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1001 19:52:49.840797   48985 command_runner.go:130] > # cpuset = 0
	I1001 19:52:49.840803   48985 command_runner.go:130] > # cpushares = "0-1"
	I1001 19:52:49.840806   48985 command_runner.go:130] > # Where:
	I1001 19:52:49.840813   48985 command_runner.go:130] > # The workload name is workload-type.
	I1001 19:52:49.840820   48985 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1001 19:52:49.840826   48985 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1001 19:52:49.840832   48985 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1001 19:52:49.840840   48985 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1001 19:52:49.840847   48985 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
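
The workload example above ends with a per-container override annotation whose value is a small JSON object, e.g. {"cpushares": "value"}, keyed by "<annotation_prefix>/<container name>". A hedged Go sketch of looking up and decoding such an annotation (the key layout follows the example; the function and names are illustrative, not CRI-O code):

package main

import (
	"encoding/json"
	"fmt"
)

// containerOverrides decodes a per-container resource override annotation of the
// form shown in the config comments above: "<prefix>/<ctrName>" -> {"cpushares": "..."}.
func containerOverrides(annotations map[string]string, prefix, ctrName string) (map[string]string, error) {
	raw, ok := annotations[prefix+"/"+ctrName]
	if !ok {
		return nil, nil // no override: the workload's default values apply
	}
	overrides := map[string]string{}
	if err := json.Unmarshal([]byte(raw), &overrides); err != nil {
		return nil, fmt.Errorf("bad override annotation: %w", err)
	}
	return overrides, nil
}

func main() {
	pod := map[string]string{
		"io.crio/workload":                  "", // activation annotation (key only, value ignored)
		"io.crio.workload-type/mycontainer": `{"cpushares": "512"}`,
	}
	o, err := containerOverrides(pod, "io.crio.workload-type", "mycontainer")
	fmt.Println(o, err) // map[cpushares:512] <nil>
}
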
	I1001 19:52:49.840854   48985 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1001 19:52:49.840861   48985 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1001 19:52:49.840867   48985 command_runner.go:130] > # Default value is set to true
	I1001 19:52:49.840872   48985 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1001 19:52:49.840879   48985 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1001 19:52:49.840886   48985 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1001 19:52:49.840890   48985 command_runner.go:130] > # Default value is set to 'false'
	I1001 19:52:49.840896   48985 command_runner.go:130] > # disable_hostport_mapping = false
	I1001 19:52:49.840905   48985 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1001 19:52:49.840909   48985 command_runner.go:130] > #
	I1001 19:52:49.840914   48985 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1001 19:52:49.840920   48985 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1001 19:52:49.840925   48985 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1001 19:52:49.840930   48985 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1001 19:52:49.840937   48985 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1001 19:52:49.840940   48985 command_runner.go:130] > [crio.image]
	I1001 19:52:49.840945   48985 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1001 19:52:49.840949   48985 command_runner.go:130] > # default_transport = "docker://"
	I1001 19:52:49.840956   48985 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1001 19:52:49.840961   48985 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1001 19:52:49.840965   48985 command_runner.go:130] > # global_auth_file = ""
	I1001 19:52:49.840970   48985 command_runner.go:130] > # The image used to instantiate infra containers.
	I1001 19:52:49.840974   48985 command_runner.go:130] > # This option supports live configuration reload.
	I1001 19:52:49.840978   48985 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1001 19:52:49.840984   48985 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1001 19:52:49.840989   48985 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1001 19:52:49.840994   48985 command_runner.go:130] > # This option supports live configuration reload.
	I1001 19:52:49.840998   48985 command_runner.go:130] > # pause_image_auth_file = ""
	I1001 19:52:49.841003   48985 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1001 19:52:49.841009   48985 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1001 19:52:49.841014   48985 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1001 19:52:49.841019   48985 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1001 19:52:49.841022   48985 command_runner.go:130] > # pause_command = "/pause"
	I1001 19:52:49.841028   48985 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1001 19:52:49.841033   48985 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1001 19:52:49.841038   48985 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1001 19:52:49.841045   48985 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1001 19:52:49.841050   48985 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1001 19:52:49.841056   48985 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1001 19:52:49.841059   48985 command_runner.go:130] > # pinned_images = [
	I1001 19:52:49.841063   48985 command_runner.go:130] > # ]
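
The pinned_images comment above defines three pattern styles: exact (must match the whole name), glob (a trailing *), and keyword (wildcards on both ends). A small Go sketch of that matching rule, assuming this plain prefix/substring interpretation rather than CRI-O's actual matcher:

package main

import (
	"fmt"
	"strings"
)

// matchesPinned applies the three pattern styles described above:
// exact (whole name), glob ("prefix*"), and keyword ("*substring*").
func matchesPinned(image, pattern string) bool {
	switch {
	case strings.HasPrefix(pattern, "*") && strings.HasSuffix(pattern, "*"):
		return strings.Contains(image, strings.Trim(pattern, "*"))
	case strings.HasSuffix(pattern, "*"):
		return strings.HasPrefix(image, strings.TrimSuffix(pattern, "*"))
	default:
		return image == pattern
	}
}

func main() {
	img := "registry.k8s.io/pause:3.10"
	fmt.Println(matchesPinned(img, "registry.k8s.io/pause:3.10")) // true (exact)
	fmt.Println(matchesPinned(img, "registry.k8s.io/*"))          // true (glob)
	fmt.Println(matchesPinned(img, "*pause*"))                    // true (keyword)
	fmt.Println(matchesPinned(img, "*coredns*"))                  // false
}
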
	I1001 19:52:49.841070   48985 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1001 19:52:49.841076   48985 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1001 19:52:49.841085   48985 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1001 19:52:49.841093   48985 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1001 19:52:49.841097   48985 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1001 19:52:49.841104   48985 command_runner.go:130] > # signature_policy = ""
	I1001 19:52:49.841109   48985 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1001 19:52:49.841118   48985 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1001 19:52:49.841127   48985 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1001 19:52:49.841139   48985 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1001 19:52:49.841152   48985 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1001 19:52:49.841162   48985 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1001 19:52:49.841174   48985 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1001 19:52:49.841186   48985 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1001 19:52:49.841195   48985 command_runner.go:130] > # changing them here.
	I1001 19:52:49.841201   48985 command_runner.go:130] > # insecure_registries = [
	I1001 19:52:49.841209   48985 command_runner.go:130] > # ]
	I1001 19:52:49.841217   48985 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1001 19:52:49.841227   48985 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1001 19:52:49.841235   48985 command_runner.go:130] > # image_volumes = "mkdir"
	I1001 19:52:49.841247   48985 command_runner.go:130] > # Temporary directory to use for storing big files
	I1001 19:52:49.841254   48985 command_runner.go:130] > # big_files_temporary_dir = ""
	I1001 19:52:49.841260   48985 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1001 19:52:49.841266   48985 command_runner.go:130] > # CNI plugins.
	I1001 19:52:49.841270   48985 command_runner.go:130] > [crio.network]
	I1001 19:52:49.841278   48985 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1001 19:52:49.841285   48985 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1001 19:52:49.841289   48985 command_runner.go:130] > # cni_default_network = ""
	I1001 19:52:49.841297   48985 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1001 19:52:49.841303   48985 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1001 19:52:49.841308   48985 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1001 19:52:49.841314   48985 command_runner.go:130] > # plugin_dirs = [
	I1001 19:52:49.841318   48985 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1001 19:52:49.841324   48985 command_runner.go:130] > # ]
	I1001 19:52:49.841331   48985 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1001 19:52:49.841336   48985 command_runner.go:130] > [crio.metrics]
	I1001 19:52:49.841341   48985 command_runner.go:130] > # Globally enable or disable metrics support.
	I1001 19:52:49.841347   48985 command_runner.go:130] > enable_metrics = true
	I1001 19:52:49.841352   48985 command_runner.go:130] > # Specify enabled metrics collectors.
	I1001 19:52:49.841358   48985 command_runner.go:130] > # Per default all metrics are enabled.
	I1001 19:52:49.841364   48985 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1001 19:52:49.841372   48985 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1001 19:52:49.841380   48985 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1001 19:52:49.841387   48985 command_runner.go:130] > # metrics_collectors = [
	I1001 19:52:49.841391   48985 command_runner.go:130] > # 	"operations",
	I1001 19:52:49.841397   48985 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1001 19:52:49.841402   48985 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1001 19:52:49.841408   48985 command_runner.go:130] > # 	"operations_errors",
	I1001 19:52:49.841412   48985 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1001 19:52:49.841418   48985 command_runner.go:130] > # 	"image_pulls_by_name",
	I1001 19:52:49.841423   48985 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1001 19:52:49.841432   48985 command_runner.go:130] > # 	"image_pulls_failures",
	I1001 19:52:49.841439   48985 command_runner.go:130] > # 	"image_pulls_successes",
	I1001 19:52:49.841443   48985 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1001 19:52:49.841447   48985 command_runner.go:130] > # 	"image_layer_reuse",
	I1001 19:52:49.841454   48985 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1001 19:52:49.841458   48985 command_runner.go:130] > # 	"containers_oom_total",
	I1001 19:52:49.841462   48985 command_runner.go:130] > # 	"containers_oom",
	I1001 19:52:49.841467   48985 command_runner.go:130] > # 	"processes_defunct",
	I1001 19:52:49.841471   48985 command_runner.go:130] > # 	"operations_total",
	I1001 19:52:49.841477   48985 command_runner.go:130] > # 	"operations_latency_seconds",
	I1001 19:52:49.841481   48985 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1001 19:52:49.841487   48985 command_runner.go:130] > # 	"operations_errors_total",
	I1001 19:52:49.841491   48985 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1001 19:52:49.841497   48985 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1001 19:52:49.841501   48985 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1001 19:52:49.841507   48985 command_runner.go:130] > # 	"image_pulls_success_total",
	I1001 19:52:49.841512   48985 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1001 19:52:49.841518   48985 command_runner.go:130] > # 	"containers_oom_count_total",
	I1001 19:52:49.841522   48985 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1001 19:52:49.841528   48985 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1001 19:52:49.841531   48985 command_runner.go:130] > # ]
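
As the comment above notes, a collector name matches with or without the "container_runtime_" and "crio_" prefixes, so "operations", "crio_operations" and "container_runtime_crio_operations" are treated alike. A short Go sketch of that normalization (illustrative only):

package main

import (
	"fmt"
	"strings"
)

// canonicalCollector strips the optional prefixes so that "operations",
// "crio_operations" and "container_runtime_crio_operations" compare equal.
func canonicalCollector(name string) string {
	name = strings.TrimPrefix(name, "container_runtime_")
	return strings.TrimPrefix(name, "crio_")
}

func main() {
	for _, n := range []string{"operations", "crio_operations", "container_runtime_crio_operations"} {
		fmt.Println(n, "->", canonicalCollector(n))
	}
}
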
	I1001 19:52:49.841536   48985 command_runner.go:130] > # The port on which the metrics server will listen.
	I1001 19:52:49.841542   48985 command_runner.go:130] > # metrics_port = 9090
	I1001 19:52:49.841547   48985 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1001 19:52:49.841553   48985 command_runner.go:130] > # metrics_socket = ""
	I1001 19:52:49.841558   48985 command_runner.go:130] > # The certificate for the secure metrics server.
	I1001 19:52:49.841566   48985 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1001 19:52:49.841572   48985 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1001 19:52:49.841579   48985 command_runner.go:130] > # certificate on any modification event.
	I1001 19:52:49.841583   48985 command_runner.go:130] > # metrics_cert = ""
	I1001 19:52:49.841591   48985 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1001 19:52:49.841597   48985 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1001 19:52:49.841601   48985 command_runner.go:130] > # metrics_key = ""
	I1001 19:52:49.841608   48985 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1001 19:52:49.841612   48985 command_runner.go:130] > [crio.tracing]
	I1001 19:52:49.841617   48985 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1001 19:52:49.841623   48985 command_runner.go:130] > # enable_tracing = false
	I1001 19:52:49.841628   48985 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1001 19:52:49.841637   48985 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1001 19:52:49.841644   48985 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1001 19:52:49.841650   48985 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1001 19:52:49.841654   48985 command_runner.go:130] > # CRI-O NRI configuration.
	I1001 19:52:49.841657   48985 command_runner.go:130] > [crio.nri]
	I1001 19:52:49.841664   48985 command_runner.go:130] > # Globally enable or disable NRI.
	I1001 19:52:49.841668   48985 command_runner.go:130] > # enable_nri = false
	I1001 19:52:49.841676   48985 command_runner.go:130] > # NRI socket to listen on.
	I1001 19:52:49.841685   48985 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1001 19:52:49.841691   48985 command_runner.go:130] > # NRI plugin directory to use.
	I1001 19:52:49.841696   48985 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1001 19:52:49.841701   48985 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1001 19:52:49.841712   48985 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1001 19:52:49.841717   48985 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1001 19:52:49.841723   48985 command_runner.go:130] > # nri_disable_connections = false
	I1001 19:52:49.841728   48985 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1001 19:52:49.841742   48985 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1001 19:52:49.841747   48985 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1001 19:52:49.841754   48985 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1001 19:52:49.841759   48985 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1001 19:52:49.841765   48985 command_runner.go:130] > [crio.stats]
	I1001 19:52:49.841771   48985 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1001 19:52:49.841778   48985 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1001 19:52:49.841782   48985 command_runner.go:130] > # stats_collection_period = 0
	I1001 19:52:49.841892   48985 cni.go:84] Creating CNI manager for ""
	I1001 19:52:49.841906   48985 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1001 19:52:49.841914   48985 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 19:52:49.841941   48985 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.165 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-325713 NodeName:multinode-325713 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 19:52:49.842103   48985 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.165
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-325713"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.165
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
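
The kubeadm, kubelet and kube-proxy YAML above is rendered by minikube from the option set logged at 19:52:49.841941 and later copied to /var/tmp/minikube/kubeadm.yaml.new (2160 bytes). A minimal Go sketch of the general technique, rendering a small fragment with text/template from a hypothetical options struct (this is not minikube's actual template or type):

package main

import (
	"os"
	"text/template"
)

// Options is a hypothetical subset of the values seen in the log above.
type Options struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
	opts := Options{AdvertiseAddress: "192.168.39.165", BindPort: 8443, NodeName: "multinode-325713", PodSubnet: "10.244.0.0/16"}
	// In minikube the rendered bytes are copied to the node over SSH; here we just print them.
	_ = tmpl.Execute(os.Stdout, opts)
}
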
	
	I1001 19:52:49.842168   48985 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 19:52:49.851776   48985 command_runner.go:130] > kubeadm
	I1001 19:52:49.851794   48985 command_runner.go:130] > kubectl
	I1001 19:52:49.851800   48985 command_runner.go:130] > kubelet
	I1001 19:52:49.851826   48985 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 19:52:49.851883   48985 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 19:52:49.860589   48985 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1001 19:52:49.876591   48985 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 19:52:49.891917   48985 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1001 19:52:49.907137   48985 ssh_runner.go:195] Run: grep 192.168.39.165	control-plane.minikube.internal$ /etc/hosts
	I1001 19:52:49.910877   48985 command_runner.go:130] > 192.168.39.165	control-plane.minikube.internal
	I1001 19:52:49.911001   48985 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:52:50.042930   48985 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 19:52:50.056659   48985 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/multinode-325713 for IP: 192.168.39.165
	I1001 19:52:50.056696   48985 certs.go:194] generating shared ca certs ...
	I1001 19:52:50.056713   48985 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:52:50.056880   48985 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 19:52:50.056924   48985 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 19:52:50.056938   48985 certs.go:256] generating profile certs ...
	I1001 19:52:50.057020   48985 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/multinode-325713/client.key
	I1001 19:52:50.057090   48985 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/multinode-325713/apiserver.key.93594a76
	I1001 19:52:50.057131   48985 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/multinode-325713/proxy-client.key
	I1001 19:52:50.057142   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 19:52:50.057159   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 19:52:50.057174   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 19:52:50.057187   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 19:52:50.057200   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/multinode-325713/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 19:52:50.057214   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/multinode-325713/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 19:52:50.057230   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/multinode-325713/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 19:52:50.057244   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/multinode-325713/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 19:52:50.057297   48985 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem (1338 bytes)
	W1001 19:52:50.057331   48985 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430_empty.pem, impossibly tiny 0 bytes
	I1001 19:52:50.057346   48985 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 19:52:50.057375   48985 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 19:52:50.057410   48985 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 19:52:50.057437   48985 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 19:52:50.057481   48985 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem (1708 bytes)
	I1001 19:52:50.057513   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /usr/share/ca-certificates/184302.pem
	I1001 19:52:50.057530   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:52:50.057546   48985 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem -> /usr/share/ca-certificates/18430.pem
	I1001 19:52:50.058101   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 19:52:50.082711   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 19:52:50.106754   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 19:52:50.129583   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 19:52:50.154005   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/multinode-325713/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1001 19:52:50.178885   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/multinode-325713/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 19:52:50.202452   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/multinode-325713/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 19:52:50.226496   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/multinode-325713/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1001 19:52:50.250879   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /usr/share/ca-certificates/184302.pem (1708 bytes)
	I1001 19:52:50.274976   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 19:52:50.299353   48985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem --> /usr/share/ca-certificates/18430.pem (1338 bytes)
	I1001 19:52:50.323050   48985 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 19:52:50.339273   48985 ssh_runner.go:195] Run: openssl version
	I1001 19:52:50.345153   48985 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1001 19:52:50.345226   48985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/184302.pem && ln -fs /usr/share/ca-certificates/184302.pem /etc/ssl/certs/184302.pem"
	I1001 19:52:50.355918   48985 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/184302.pem
	I1001 19:52:50.360545   48985 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 19:52:50.360619   48985 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 19:52:50.360680   48985 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/184302.pem
	I1001 19:52:50.366369   48985 command_runner.go:130] > 3ec20f2e
	I1001 19:52:50.366454   48985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/184302.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 19:52:50.375538   48985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 19:52:50.385776   48985 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:52:50.390018   48985 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:52:50.390042   48985 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:52:50.390076   48985 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:52:50.395646   48985 command_runner.go:130] > b5213941
	I1001 19:52:50.395755   48985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 19:52:50.405126   48985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18430.pem && ln -fs /usr/share/ca-certificates/18430.pem /etc/ssl/certs/18430.pem"
	I1001 19:52:50.415435   48985 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18430.pem
	I1001 19:52:50.419669   48985 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 19:52:50.419690   48985 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 19:52:50.419727   48985 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18430.pem
	I1001 19:52:50.424835   48985 command_runner.go:130] > 51391683
	I1001 19:52:50.425013   48985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18430.pem /etc/ssl/certs/51391683.0"
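
The two steps above compute the certificate's subject hash with `openssl x509 -hash -noout` and then install the certificate as /etc/ssl/certs/<hash>.0 unless a symlink is already present. A hedged Go sketch of just the symlink step, mirroring `test -L <link> || ln -fs <target> <link>` (paths copied from the log; not minikube's code):

package main

import (
	"fmt"
	"os"
)

// ensureHashLink creates the /etc/ssl/certs/<hash>.0 symlink only if the
// destination is not already a symlink, like `test -L <link> || ln -fs <target> <link>`.
func ensureHashLink(target, link string) error {
	if fi, err := os.Lstat(link); err == nil && fi.Mode()&os.ModeSymlink != 0 {
		return nil // already a symlink; leave it alone
	}
	os.Remove(link) // mimic ln -f: drop any plain file that may be in the way
	return os.Symlink(target, link)
}

func main() {
	// The hash (51391683) was produced by `openssl x509 -hash -noout` above.
	err := ensureHashLink("/etc/ssl/certs/18430.pem", "/etc/ssl/certs/51391683.0")
	fmt.Println("symlink result:", err)
}
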
	I1001 19:52:50.433789   48985 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 19:52:50.437744   48985 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 19:52:50.437773   48985 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1001 19:52:50.437780   48985 command_runner.go:130] > Device: 253,1	Inode: 9431080     Links: 1
	I1001 19:52:50.437786   48985 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1001 19:52:50.437796   48985 command_runner.go:130] > Access: 2024-10-01 19:45:59.439804893 +0000
	I1001 19:52:50.437804   48985 command_runner.go:130] > Modify: 2024-10-01 19:45:59.439804893 +0000
	I1001 19:52:50.437811   48985 command_runner.go:130] > Change: 2024-10-01 19:45:59.439804893 +0000
	I1001 19:52:50.437819   48985 command_runner.go:130] >  Birth: 2024-10-01 19:45:59.439804893 +0000
	I1001 19:52:50.437890   48985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1001 19:52:50.443078   48985 command_runner.go:130] > Certificate will not expire
	I1001 19:52:50.443147   48985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1001 19:52:50.448249   48985 command_runner.go:130] > Certificate will not expire
	I1001 19:52:50.448322   48985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1001 19:52:50.453288   48985 command_runner.go:130] > Certificate will not expire
	I1001 19:52:50.453485   48985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1001 19:52:50.458554   48985 command_runner.go:130] > Certificate will not expire
	I1001 19:52:50.458609   48985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1001 19:52:50.463627   48985 command_runner.go:130] > Certificate will not expire
	I1001 19:52:50.463726   48985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1001 19:52:50.469018   48985 command_runner.go:130] > Certificate will not expire
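
Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate expires within the next 24 hours (86400 seconds); "Certificate will not expire" means it does not. A Go equivalent using crypto/x509, shown only as an illustration of the check:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
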
	I1001 19:52:50.469102   48985 kubeadm.go:392] StartCluster: {Name:multinode-325713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:multinode-325713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.61 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:52:50.469258   48985 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 19:52:50.469326   48985 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 19:52:50.504493   48985 command_runner.go:130] > 50abdc221179748e0e77361468d2966268a00dbb73094bf062c3fc4dd4abab85
	I1001 19:52:50.504525   48985 command_runner.go:130] > e73d14dc9d500ff21c9c06b2a020bc93776770a604b0f75cc50773c705cd6ff0
	I1001 19:52:50.504535   48985 command_runner.go:130] > 74cc8c8d45eb8e370db6e5a01ea691e69597f620ba5e51937ab8a6c8a4efb2ab
	I1001 19:52:50.504545   48985 command_runner.go:130] > c753d689839b591bdbd1e4b857d4fc8207b7f30ad75e876545e5c20f6aea9c17
	I1001 19:52:50.504554   48985 command_runner.go:130] > 99e0c7308d4811ed9a37c1503f0c4f652731732c727fddb6b2ca91a0d1d1207a
	I1001 19:52:50.504562   48985 command_runner.go:130] > b5825a9ff6472b047a265e5afd5811414044723e46816206e4beb6b418b69e24
	I1001 19:52:50.504567   48985 command_runner.go:130] > a87badf95fa60dbc3077c37ef50ce351665296ba8906532d97ef8dc1f7e01639
	I1001 19:52:50.504574   48985 command_runner.go:130] > 19d51ef666dc5924ed51fc7a56fd063604dfa6c7c7ed8f6f1edf773e2ddf803a
	I1001 19:52:50.504594   48985 cri.go:89] found id: "50abdc221179748e0e77361468d2966268a00dbb73094bf062c3fc4dd4abab85"
	I1001 19:52:50.504602   48985 cri.go:89] found id: "e73d14dc9d500ff21c9c06b2a020bc93776770a604b0f75cc50773c705cd6ff0"
	I1001 19:52:50.504605   48985 cri.go:89] found id: "74cc8c8d45eb8e370db6e5a01ea691e69597f620ba5e51937ab8a6c8a4efb2ab"
	I1001 19:52:50.504610   48985 cri.go:89] found id: "c753d689839b591bdbd1e4b857d4fc8207b7f30ad75e876545e5c20f6aea9c17"
	I1001 19:52:50.504613   48985 cri.go:89] found id: "99e0c7308d4811ed9a37c1503f0c4f652731732c727fddb6b2ca91a0d1d1207a"
	I1001 19:52:50.504618   48985 cri.go:89] found id: "b5825a9ff6472b047a265e5afd5811414044723e46816206e4beb6b418b69e24"
	I1001 19:52:50.504621   48985 cri.go:89] found id: "a87badf95fa60dbc3077c37ef50ce351665296ba8906532d97ef8dc1f7e01639"
	I1001 19:52:50.504624   48985 cri.go:89] found id: "19d51ef666dc5924ed51fc7a56fd063604dfa6c7c7ed8f6f1edf773e2ddf803a"
	I1001 19:52:50.504626   48985 cri.go:89] found id: ""
	I1001 19:52:50.504667   48985 ssh_runner.go:195] Run: sudo runc list -f json
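
The "found id" lines above are the result of splitting the newline-separated output of `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` into container IDs. A minimal Go sketch of that parsing step, assuming one ID per line (not minikube's actual cri.go code):

package main

import (
	"fmt"
	"strings"
)

// parseContainerIDs splits `crictl ps -a --quiet` output into non-empty IDs.
func parseContainerIDs(out string) []string {
	var ids []string
	for _, line := range strings.Split(out, "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids
}

func main() {
	out := "50abdc221179748e0e77361468d2966268a00dbb73094bf062c3fc4dd4abab85\ne73d14dc9d500ff21c9c06b2a020bc93776770a604b0f75cc50773c705cd6ff0\n"
	for _, id := range parseContainerIDs(out) {
		fmt.Println("found id:", id)
	}
}
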
	
	
	==> CRI-O <==
	Oct 01 19:57:02 multinode-325713 crio[2688]: time="2024-10-01 19:57:02.745524623Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812622745497584,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a4dcaa19-98cc-470e-8740-e218d49a4f6e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:57:02 multinode-325713 crio[2688]: time="2024-10-01 19:57:02.746394856Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=12fb3b61-0702-4125-baf5-dfabff872973 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:57:02 multinode-325713 crio[2688]: time="2024-10-01 19:57:02.746469116Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=12fb3b61-0702-4125-baf5-dfabff872973 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:57:02 multinode-325713 crio[2688]: time="2024-10-01 19:57:02.747055939Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b834b2eb85399ce2fb868e88427aa76dca10dbdd8cbbaa50408427c4924cfc2,PodSandboxId:8d512f727350db7e42fb355890131b9202b3b5ac2f7cf97bb0ac0897743a2887,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727812411494846624,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nhjc5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10d9056-2747-4c16-be7f-478f969d5d3a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70eda3c33aa85b6d005a8045ca58d14983670f13a2f0a3403770d9d77d1eaf5,PodSandboxId:f6df19e7ef815784870ba6cfaa2a215f639a34f4ba4aa828afe952fa36f201ae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727812377889364661,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7kvjb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3af66262-e3a8-4687-bf10-f7e139689769,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8622827e0bea2327d91748e496fa5a35b91539cac3bc3d17318689ecbf817385,PodSandboxId:7a04724b1fa98a29c2c22ae184bff58fbe3f0d94fd27dc3b3789b7be5c370477,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727812377961693569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-swx5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fa4c293-22e5-4a77-b851-3ae28e745a58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e133d880f95931cecea791c307dbd3b63126f009258e015eab481ab1ffde4c7a,PodSandboxId:81b1c36d12ae4bd3bf4d4982f3599008911582918553c0a746f84be09c849dc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727812377886814724,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqznz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4551ba7-af95-4699-a104-1e0b0c3c965b,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f406b30d4c63e456cd07ab7d97da4bf2e332b36fcc54315320d56f51c5399c,PodSandboxId:b10670eed02ee460bbe023ab5779a9f0a7aed0572e68bca0fec3438878b0a36e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727812377839104762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279c7942-da40-453f-b94f-2795b1c84c5d,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8cca9641eaeaec6d2442941fde94017ec30b1be1fa78944aa88014145d48b14,PodSandboxId:d177cd495f846e32744ad856b6fc7972ac9d9a2642ad5545b96f957ef7b1f3ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727812372965848782,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b4c161593a5efe7a8c831c698c0fa1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91728fb10efec288907940bbd25068fbc59d3bf362903c3ee875cb425a7e9570,PodSandboxId:b9ca6a212d89c8fee5b39731beaa923e3c583d0537768e387c542d6f17a7845c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727812372994973249,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 054fcb6311ec0c9eed872f8380f10fef,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb81c99fa1c2e42286c70d520a49c2ad2ec6c2c8c728399216de29090137bf71,PodSandboxId:7590ee8b5f847ce6d77d4d8d1ae22ae7e1de9601c8c50fdce24c675a9303bffe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727812372902988561,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14066c00a6e2b917f1e44bc7ec721b3b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:454294743e585aefdb165075a88ffc9546bf7dac940543d3064fa8252cfe5b64,PodSandboxId:e1b7e71701a6300d0368715b3ead5c3d45d5044a83e61dcf39e592d182fc1042,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727812372884212491,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bf22a4e81d08ba90ba65b23ac7659,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36dd8c1d835614188d453bde713aaf3ead777013b290b859b9ef1cf875c1b685,PodSandboxId:0108e2f859c4b9e450abbb0dc80b3ea050d18785ce021d693ab87b230b013c18,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727812043159380149,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nhjc5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10d9056-2747-4c16-be7f-478f969d5d3a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50abdc221179748e0e77361468d2966268a00dbb73094bf062c3fc4dd4abab85,PodSandboxId:9013eb36b71b5e1fe146ed5c7cacfd3d5fa4aac2a0073e7c062d23327122e28e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727811987017516787,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-swx5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fa4c293-22e5-4a77-b851-3ae28e745a58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73d14dc9d500ff21c9c06b2a020bc93776770a604b0f75cc50773c705cd6ff0,PodSandboxId:b9b43bf6e515ac84d92762f64983b1829820bf2bd6a095077cc936f208c9d88f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727811986960924497,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 279c7942-da40-453f-b94f-2795b1c84c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74cc8c8d45eb8e370db6e5a01ea691e69597f620ba5e51937ab8a6c8a4efb2ab,PodSandboxId:7f69035cb9fd7cd59575e995ecaf53d33d5b0cc28348f2994cc6d8258bbe1a39,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727811975043730765,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7kvjb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 3af66262-e3a8-4687-bf10-f7e139689769,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c753d689839b591bdbd1e4b857d4fc8207b7f30ad75e876545e5c20f6aea9c17,PodSandboxId:12f30e785442ae580bcbeff933862b69e54008da91a2c67f36e2a9d0c48d8e72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727811974820289353,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqznz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4551ba7-af95-4699-a104
-1e0b0c3c965b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99e0c7308d4811ed9a37c1503f0c4f652731732c727fddb6b2ca91a0d1d1207a,PodSandboxId:4f3f4a90ff8fba31bc0128beb7941fee07256d96ae2bb46791196434f6cc2a35,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727811963393478784,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 054fcb6
311ec0c9eed872f8380f10fef,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a87badf95fa60dbc3077c37ef50ce351665296ba8906532d97ef8dc1f7e01639,PodSandboxId:c8353829f738e155c8a3fd6c5b006eca9c86c3471c91bda191a91b08ce182339,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727811963386369788,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b4c161593a5efe7a8c831c698c0fa1,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5825a9ff6472b047a265e5afd5811414044723e46816206e4beb6b418b69e24,PodSandboxId:5790e9d0912b0802ba052d340247180ecd19df92be2648f40ae71124e5e27d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727811963393006733,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14066c00a6e2b917f1e44bc7ec721b3b,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19d51ef666dc5924ed51fc7a56fd063604dfa6c7c7ed8f6f1edf773e2ddf803a,PodSandboxId:d1410a265d7f54ed665f318621cb9f3ed483ad896ddb5139c6fc994458d41b4a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727811963264479921,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bf22a4e81d08ba90ba65b23ac7659,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=12fb3b61-0702-4125-baf5-dfabff872973 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:57:02 multinode-325713 crio[2688]: time="2024-10-01 19:57:02.794630868Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fe28258f-db33-4c53-8854-a5aa4d5d78f2 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:57:02 multinode-325713 crio[2688]: time="2024-10-01 19:57:02.794732128Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fe28258f-db33-4c53-8854-a5aa4d5d78f2 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:57:02 multinode-325713 crio[2688]: time="2024-10-01 19:57:02.795914602Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f2034af1-e83c-44d5-99a8-9b3515903c37 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:57:02 multinode-325713 crio[2688]: time="2024-10-01 19:57:02.796902234Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812622796851356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2034af1-e83c-44d5-99a8-9b3515903c37 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:57:02 multinode-325713 crio[2688]: time="2024-10-01 19:57:02.797546408Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e266ee82-d73d-4721-b566-4eb1bfd74425 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:57:02 multinode-325713 crio[2688]: time="2024-10-01 19:57:02.797680750Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e266ee82-d73d-4721-b566-4eb1bfd74425 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:57:02 multinode-325713 crio[2688]: time="2024-10-01 19:57:02.798022463Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b834b2eb85399ce2fb868e88427aa76dca10dbdd8cbbaa50408427c4924cfc2,PodSandboxId:8d512f727350db7e42fb355890131b9202b3b5ac2f7cf97bb0ac0897743a2887,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727812411494846624,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nhjc5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10d9056-2747-4c16-be7f-478f969d5d3a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70eda3c33aa85b6d005a8045ca58d14983670f13a2f0a3403770d9d77d1eaf5,PodSandboxId:f6df19e7ef815784870ba6cfaa2a215f639a34f4ba4aa828afe952fa36f201ae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727812377889364661,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7kvjb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3af66262-e3a8-4687-bf10-f7e139689769,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8622827e0bea2327d91748e496fa5a35b91539cac3bc3d17318689ecbf817385,PodSandboxId:7a04724b1fa98a29c2c22ae184bff58fbe3f0d94fd27dc3b3789b7be5c370477,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727812377961693569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-swx5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fa4c293-22e5-4a77-b851-3ae28e745a58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e133d880f95931cecea791c307dbd3b63126f009258e015eab481ab1ffde4c7a,PodSandboxId:81b1c36d12ae4bd3bf4d4982f3599008911582918553c0a746f84be09c849dc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727812377886814724,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqznz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4551ba7-af95-4699-a104-1e0b0c3c965b,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f406b30d4c63e456cd07ab7d97da4bf2e332b36fcc54315320d56f51c5399c,PodSandboxId:b10670eed02ee460bbe023ab5779a9f0a7aed0572e68bca0fec3438878b0a36e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727812377839104762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279c7942-da40-453f-b94f-2795b1c84c5d,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8cca9641eaeaec6d2442941fde94017ec30b1be1fa78944aa88014145d48b14,PodSandboxId:d177cd495f846e32744ad856b6fc7972ac9d9a2642ad5545b96f957ef7b1f3ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727812372965848782,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b4c161593a5efe7a8c831c698c0fa1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91728fb10efec288907940bbd25068fbc59d3bf362903c3ee875cb425a7e9570,PodSandboxId:b9ca6a212d89c8fee5b39731beaa923e3c583d0537768e387c542d6f17a7845c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727812372994973249,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 054fcb6311ec0c9eed872f8380f10fef,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb81c99fa1c2e42286c70d520a49c2ad2ec6c2c8c728399216de29090137bf71,PodSandboxId:7590ee8b5f847ce6d77d4d8d1ae22ae7e1de9601c8c50fdce24c675a9303bffe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727812372902988561,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14066c00a6e2b917f1e44bc7ec721b3b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:454294743e585aefdb165075a88ffc9546bf7dac940543d3064fa8252cfe5b64,PodSandboxId:e1b7e71701a6300d0368715b3ead5c3d45d5044a83e61dcf39e592d182fc1042,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727812372884212491,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bf22a4e81d08ba90ba65b23ac7659,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36dd8c1d835614188d453bde713aaf3ead777013b290b859b9ef1cf875c1b685,PodSandboxId:0108e2f859c4b9e450abbb0dc80b3ea050d18785ce021d693ab87b230b013c18,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727812043159380149,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nhjc5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10d9056-2747-4c16-be7f-478f969d5d3a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50abdc221179748e0e77361468d2966268a00dbb73094bf062c3fc4dd4abab85,PodSandboxId:9013eb36b71b5e1fe146ed5c7cacfd3d5fa4aac2a0073e7c062d23327122e28e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727811987017516787,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-swx5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fa4c293-22e5-4a77-b851-3ae28e745a58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73d14dc9d500ff21c9c06b2a020bc93776770a604b0f75cc50773c705cd6ff0,PodSandboxId:b9b43bf6e515ac84d92762f64983b1829820bf2bd6a095077cc936f208c9d88f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727811986960924497,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 279c7942-da40-453f-b94f-2795b1c84c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74cc8c8d45eb8e370db6e5a01ea691e69597f620ba5e51937ab8a6c8a4efb2ab,PodSandboxId:7f69035cb9fd7cd59575e995ecaf53d33d5b0cc28348f2994cc6d8258bbe1a39,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727811975043730765,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7kvjb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 3af66262-e3a8-4687-bf10-f7e139689769,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c753d689839b591bdbd1e4b857d4fc8207b7f30ad75e876545e5c20f6aea9c17,PodSandboxId:12f30e785442ae580bcbeff933862b69e54008da91a2c67f36e2a9d0c48d8e72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727811974820289353,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqznz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4551ba7-af95-4699-a104
-1e0b0c3c965b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99e0c7308d4811ed9a37c1503f0c4f652731732c727fddb6b2ca91a0d1d1207a,PodSandboxId:4f3f4a90ff8fba31bc0128beb7941fee07256d96ae2bb46791196434f6cc2a35,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727811963393478784,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 054fcb6
311ec0c9eed872f8380f10fef,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a87badf95fa60dbc3077c37ef50ce351665296ba8906532d97ef8dc1f7e01639,PodSandboxId:c8353829f738e155c8a3fd6c5b006eca9c86c3471c91bda191a91b08ce182339,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727811963386369788,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b4c161593a5efe7a8c831c698c0fa1,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5825a9ff6472b047a265e5afd5811414044723e46816206e4beb6b418b69e24,PodSandboxId:5790e9d0912b0802ba052d340247180ecd19df92be2648f40ae71124e5e27d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727811963393006733,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14066c00a6e2b917f1e44bc7ec721b3b,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19d51ef666dc5924ed51fc7a56fd063604dfa6c7c7ed8f6f1edf773e2ddf803a,PodSandboxId:d1410a265d7f54ed665f318621cb9f3ed483ad896ddb5139c6fc994458d41b4a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727811963264479921,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bf22a4e81d08ba90ba65b23ac7659,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e266ee82-d73d-4721-b566-4eb1bfd74425 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:57:02 multinode-325713 crio[2688]: time="2024-10-01 19:57:02.845630789Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=67503e0a-82b7-471f-899e-8e5205ff5354 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:57:02 multinode-325713 crio[2688]: time="2024-10-01 19:57:02.845711541Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=67503e0a-82b7-471f-899e-8e5205ff5354 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:57:02 multinode-325713 crio[2688]: time="2024-10-01 19:57:02.847380079Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a654b6da-8db1-4c19-9802-1b6c88dfbda3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:57:02 multinode-325713 crio[2688]: time="2024-10-01 19:57:02.847975000Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812622847948715,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a654b6da-8db1-4c19-9802-1b6c88dfbda3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:57:02 multinode-325713 crio[2688]: time="2024-10-01 19:57:02.848733073Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1161f7bc-b713-4440-8956-829557121d85 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:57:02 multinode-325713 crio[2688]: time="2024-10-01 19:57:02.848801829Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1161f7bc-b713-4440-8956-829557121d85 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:57:02 multinode-325713 crio[2688]: time="2024-10-01 19:57:02.849232042Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b834b2eb85399ce2fb868e88427aa76dca10dbdd8cbbaa50408427c4924cfc2,PodSandboxId:8d512f727350db7e42fb355890131b9202b3b5ac2f7cf97bb0ac0897743a2887,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727812411494846624,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nhjc5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10d9056-2747-4c16-be7f-478f969d5d3a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70eda3c33aa85b6d005a8045ca58d14983670f13a2f0a3403770d9d77d1eaf5,PodSandboxId:f6df19e7ef815784870ba6cfaa2a215f639a34f4ba4aa828afe952fa36f201ae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727812377889364661,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7kvjb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3af66262-e3a8-4687-bf10-f7e139689769,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8622827e0bea2327d91748e496fa5a35b91539cac3bc3d17318689ecbf817385,PodSandboxId:7a04724b1fa98a29c2c22ae184bff58fbe3f0d94fd27dc3b3789b7be5c370477,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727812377961693569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-swx5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fa4c293-22e5-4a77-b851-3ae28e745a58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e133d880f95931cecea791c307dbd3b63126f009258e015eab481ab1ffde4c7a,PodSandboxId:81b1c36d12ae4bd3bf4d4982f3599008911582918553c0a746f84be09c849dc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727812377886814724,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqznz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4551ba7-af95-4699-a104-1e0b0c3c965b,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f406b30d4c63e456cd07ab7d97da4bf2e332b36fcc54315320d56f51c5399c,PodSandboxId:b10670eed02ee460bbe023ab5779a9f0a7aed0572e68bca0fec3438878b0a36e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727812377839104762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279c7942-da40-453f-b94f-2795b1c84c5d,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8cca9641eaeaec6d2442941fde94017ec30b1be1fa78944aa88014145d48b14,PodSandboxId:d177cd495f846e32744ad856b6fc7972ac9d9a2642ad5545b96f957ef7b1f3ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727812372965848782,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b4c161593a5efe7a8c831c698c0fa1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91728fb10efec288907940bbd25068fbc59d3bf362903c3ee875cb425a7e9570,PodSandboxId:b9ca6a212d89c8fee5b39731beaa923e3c583d0537768e387c542d6f17a7845c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727812372994973249,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 054fcb6311ec0c9eed872f8380f10fef,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb81c99fa1c2e42286c70d520a49c2ad2ec6c2c8c728399216de29090137bf71,PodSandboxId:7590ee8b5f847ce6d77d4d8d1ae22ae7e1de9601c8c50fdce24c675a9303bffe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727812372902988561,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14066c00a6e2b917f1e44bc7ec721b3b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:454294743e585aefdb165075a88ffc9546bf7dac940543d3064fa8252cfe5b64,PodSandboxId:e1b7e71701a6300d0368715b3ead5c3d45d5044a83e61dcf39e592d182fc1042,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727812372884212491,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bf22a4e81d08ba90ba65b23ac7659,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36dd8c1d835614188d453bde713aaf3ead777013b290b859b9ef1cf875c1b685,PodSandboxId:0108e2f859c4b9e450abbb0dc80b3ea050d18785ce021d693ab87b230b013c18,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727812043159380149,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nhjc5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10d9056-2747-4c16-be7f-478f969d5d3a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50abdc221179748e0e77361468d2966268a00dbb73094bf062c3fc4dd4abab85,PodSandboxId:9013eb36b71b5e1fe146ed5c7cacfd3d5fa4aac2a0073e7c062d23327122e28e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727811987017516787,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-swx5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fa4c293-22e5-4a77-b851-3ae28e745a58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73d14dc9d500ff21c9c06b2a020bc93776770a604b0f75cc50773c705cd6ff0,PodSandboxId:b9b43bf6e515ac84d92762f64983b1829820bf2bd6a095077cc936f208c9d88f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727811986960924497,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 279c7942-da40-453f-b94f-2795b1c84c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74cc8c8d45eb8e370db6e5a01ea691e69597f620ba5e51937ab8a6c8a4efb2ab,PodSandboxId:7f69035cb9fd7cd59575e995ecaf53d33d5b0cc28348f2994cc6d8258bbe1a39,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727811975043730765,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7kvjb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 3af66262-e3a8-4687-bf10-f7e139689769,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c753d689839b591bdbd1e4b857d4fc8207b7f30ad75e876545e5c20f6aea9c17,PodSandboxId:12f30e785442ae580bcbeff933862b69e54008da91a2c67f36e2a9d0c48d8e72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727811974820289353,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqznz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4551ba7-af95-4699-a104
-1e0b0c3c965b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99e0c7308d4811ed9a37c1503f0c4f652731732c727fddb6b2ca91a0d1d1207a,PodSandboxId:4f3f4a90ff8fba31bc0128beb7941fee07256d96ae2bb46791196434f6cc2a35,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727811963393478784,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 054fcb6
311ec0c9eed872f8380f10fef,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a87badf95fa60dbc3077c37ef50ce351665296ba8906532d97ef8dc1f7e01639,PodSandboxId:c8353829f738e155c8a3fd6c5b006eca9c86c3471c91bda191a91b08ce182339,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727811963386369788,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b4c161593a5efe7a8c831c698c0fa1,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5825a9ff6472b047a265e5afd5811414044723e46816206e4beb6b418b69e24,PodSandboxId:5790e9d0912b0802ba052d340247180ecd19df92be2648f40ae71124e5e27d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727811963393006733,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14066c00a6e2b917f1e44bc7ec721b3b,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19d51ef666dc5924ed51fc7a56fd063604dfa6c7c7ed8f6f1edf773e2ddf803a,PodSandboxId:d1410a265d7f54ed665f318621cb9f3ed483ad896ddb5139c6fc994458d41b4a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727811963264479921,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bf22a4e81d08ba90ba65b23ac7659,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1161f7bc-b713-4440-8956-829557121d85 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:57:02 multinode-325713 crio[2688]: time="2024-10-01 19:57:02.892357512Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=959326e7-e553-491c-93ef-ad71e9a1b231 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:57:02 multinode-325713 crio[2688]: time="2024-10-01 19:57:02.892435576Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=959326e7-e553-491c-93ef-ad71e9a1b231 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:57:02 multinode-325713 crio[2688]: time="2024-10-01 19:57:02.893809032Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b401fc7c-a896-4cad-95c7-6403a51eb179 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:57:02 multinode-325713 crio[2688]: time="2024-10-01 19:57:02.894217018Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812622894192569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b401fc7c-a896-4cad-95c7-6403a51eb179 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:57:02 multinode-325713 crio[2688]: time="2024-10-01 19:57:02.894684489Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=78b15107-a688-43f5-8a63-77cadae6c0fe name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:57:02 multinode-325713 crio[2688]: time="2024-10-01 19:57:02.894756556Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=78b15107-a688-43f5-8a63-77cadae6c0fe name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:57:02 multinode-325713 crio[2688]: time="2024-10-01 19:57:02.895477503Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b834b2eb85399ce2fb868e88427aa76dca10dbdd8cbbaa50408427c4924cfc2,PodSandboxId:8d512f727350db7e42fb355890131b9202b3b5ac2f7cf97bb0ac0897743a2887,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727812411494846624,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nhjc5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10d9056-2747-4c16-be7f-478f969d5d3a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70eda3c33aa85b6d005a8045ca58d14983670f13a2f0a3403770d9d77d1eaf5,PodSandboxId:f6df19e7ef815784870ba6cfaa2a215f639a34f4ba4aa828afe952fa36f201ae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727812377889364661,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7kvjb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3af66262-e3a8-4687-bf10-f7e139689769,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8622827e0bea2327d91748e496fa5a35b91539cac3bc3d17318689ecbf817385,PodSandboxId:7a04724b1fa98a29c2c22ae184bff58fbe3f0d94fd27dc3b3789b7be5c370477,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727812377961693569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-swx5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fa4c293-22e5-4a77-b851-3ae28e745a58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e133d880f95931cecea791c307dbd3b63126f009258e015eab481ab1ffde4c7a,PodSandboxId:81b1c36d12ae4bd3bf4d4982f3599008911582918553c0a746f84be09c849dc0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727812377886814724,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqznz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4551ba7-af95-4699-a104-1e0b0c3c965b,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f406b30d4c63e456cd07ab7d97da4bf2e332b36fcc54315320d56f51c5399c,PodSandboxId:b10670eed02ee460bbe023ab5779a9f0a7aed0572e68bca0fec3438878b0a36e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727812377839104762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279c7942-da40-453f-b94f-2795b1c84c5d,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8cca9641eaeaec6d2442941fde94017ec30b1be1fa78944aa88014145d48b14,PodSandboxId:d177cd495f846e32744ad856b6fc7972ac9d9a2642ad5545b96f957ef7b1f3ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727812372965848782,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b4c161593a5efe7a8c831c698c0fa1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91728fb10efec288907940bbd25068fbc59d3bf362903c3ee875cb425a7e9570,PodSandboxId:b9ca6a212d89c8fee5b39731beaa923e3c583d0537768e387c542d6f17a7845c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727812372994973249,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 054fcb6311ec0c9eed872f8380f10fef,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb81c99fa1c2e42286c70d520a49c2ad2ec6c2c8c728399216de29090137bf71,PodSandboxId:7590ee8b5f847ce6d77d4d8d1ae22ae7e1de9601c8c50fdce24c675a9303bffe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727812372902988561,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14066c00a6e2b917f1e44bc7ec721b3b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:454294743e585aefdb165075a88ffc9546bf7dac940543d3064fa8252cfe5b64,PodSandboxId:e1b7e71701a6300d0368715b3ead5c3d45d5044a83e61dcf39e592d182fc1042,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727812372884212491,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bf22a4e81d08ba90ba65b23ac7659,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36dd8c1d835614188d453bde713aaf3ead777013b290b859b9ef1cf875c1b685,PodSandboxId:0108e2f859c4b9e450abbb0dc80b3ea050d18785ce021d693ab87b230b013c18,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727812043159380149,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nhjc5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10d9056-2747-4c16-be7f-478f969d5d3a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50abdc221179748e0e77361468d2966268a00dbb73094bf062c3fc4dd4abab85,PodSandboxId:9013eb36b71b5e1fe146ed5c7cacfd3d5fa4aac2a0073e7c062d23327122e28e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727811987017516787,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-swx5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fa4c293-22e5-4a77-b851-3ae28e745a58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73d14dc9d500ff21c9c06b2a020bc93776770a604b0f75cc50773c705cd6ff0,PodSandboxId:b9b43bf6e515ac84d92762f64983b1829820bf2bd6a095077cc936f208c9d88f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727811986960924497,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 279c7942-da40-453f-b94f-2795b1c84c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74cc8c8d45eb8e370db6e5a01ea691e69597f620ba5e51937ab8a6c8a4efb2ab,PodSandboxId:7f69035cb9fd7cd59575e995ecaf53d33d5b0cc28348f2994cc6d8258bbe1a39,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727811975043730765,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7kvjb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 3af66262-e3a8-4687-bf10-f7e139689769,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c753d689839b591bdbd1e4b857d4fc8207b7f30ad75e876545e5c20f6aea9c17,PodSandboxId:12f30e785442ae580bcbeff933862b69e54008da91a2c67f36e2a9d0c48d8e72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727811974820289353,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wqznz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4551ba7-af95-4699-a104
-1e0b0c3c965b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99e0c7308d4811ed9a37c1503f0c4f652731732c727fddb6b2ca91a0d1d1207a,PodSandboxId:4f3f4a90ff8fba31bc0128beb7941fee07256d96ae2bb46791196434f6cc2a35,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727811963393478784,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 054fcb6
311ec0c9eed872f8380f10fef,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a87badf95fa60dbc3077c37ef50ce351665296ba8906532d97ef8dc1f7e01639,PodSandboxId:c8353829f738e155c8a3fd6c5b006eca9c86c3471c91bda191a91b08ce182339,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727811963386369788,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b4c161593a5efe7a8c831c698c0fa1,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5825a9ff6472b047a265e5afd5811414044723e46816206e4beb6b418b69e24,PodSandboxId:5790e9d0912b0802ba052d340247180ecd19df92be2648f40ae71124e5e27d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727811963393006733,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14066c00a6e2b917f1e44bc7ec721b3b,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19d51ef666dc5924ed51fc7a56fd063604dfa6c7c7ed8f6f1edf773e2ddf803a,PodSandboxId:d1410a265d7f54ed665f318621cb9f3ed483ad896ddb5139c6fc994458d41b4a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727811963264479921,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-325713,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bf22a4e81d08ba90ba65b23ac7659,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=78b15107-a688-43f5-8a63-77cadae6c0fe name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6b834b2eb8539       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   8d512f727350d       busybox-7dff88458-nhjc5
	8622827e0bea2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   1                   7a04724b1fa98       coredns-7c65d6cfc9-swx5f
	e70eda3c33aa8       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   f6df19e7ef815       kindnet-7kvjb
	e133d880f9593       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago       Running             kube-proxy                1                   81b1c36d12ae4       kube-proxy-wqznz
	b8f406b30d4c6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   b10670eed02ee       storage-provisioner
	91728fb10efec       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   1                   b9ca6a212d89c       kube-controller-manager-multinode-325713
	e8cca9641eaea       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   d177cd495f846       etcd-multinode-325713
	cb81c99fa1c2e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      4 minutes ago       Running             kube-scheduler            1                   7590ee8b5f847       kube-scheduler-multinode-325713
	454294743e585       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            1                   e1b7e71701a63       kube-apiserver-multinode-325713
	36dd8c1d83561       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   0108e2f859c4b       busybox-7dff88458-nhjc5
	50abdc2211797       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      10 minutes ago      Exited              coredns                   0                   9013eb36b71b5       coredns-7c65d6cfc9-swx5f
	e73d14dc9d500       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   b9b43bf6e515a       storage-provisioner
	74cc8c8d45eb8       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      10 minutes ago      Exited              kindnet-cni               0                   7f69035cb9fd7       kindnet-7kvjb
	c753d689839b5       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      10 minutes ago      Exited              kube-proxy                0                   12f30e785442a       kube-proxy-wqznz
	99e0c7308d481       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      10 minutes ago      Exited              kube-controller-manager   0                   4f3f4a90ff8fb       kube-controller-manager-multinode-325713
	b5825a9ff6472       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      10 minutes ago      Exited              kube-scheduler            0                   5790e9d0912b0       kube-scheduler-multinode-325713
	a87badf95fa60       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   c8353829f738e       etcd-multinode-325713
	19d51ef666dc5       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      10 minutes ago      Exited              kube-apiserver            0                   d1410a265d7f5       kube-apiserver-multinode-325713
	
	
	==> coredns [50abdc221179748e0e77361468d2966268a00dbb73094bf062c3fc4dd4abab85] <==
	[INFO] 10.244.0.3:34664 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001904636s
	[INFO] 10.244.0.3:53084 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000142914s
	[INFO] 10.244.0.3:53976 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111019s
	[INFO] 10.244.0.3:45703 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001228243s
	[INFO] 10.244.0.3:56693 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091392s
	[INFO] 10.244.0.3:46093 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000153855s
	[INFO] 10.244.0.3:46598 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007144s
	[INFO] 10.244.1.2:39262 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000299375s
	[INFO] 10.244.1.2:58993 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000171423s
	[INFO] 10.244.1.2:33484 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000217752s
	[INFO] 10.244.1.2:48567 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000152447s
	[INFO] 10.244.0.3:42810 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000203058s
	[INFO] 10.244.0.3:39523 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103125s
	[INFO] 10.244.0.3:58960 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158691s
	[INFO] 10.244.0.3:56682 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092203s
	[INFO] 10.244.1.2:54920 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221604s
	[INFO] 10.244.1.2:42519 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00019814s
	[INFO] 10.244.1.2:59332 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0002861s
	[INFO] 10.244.1.2:36941 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000168401s
	[INFO] 10.244.0.3:60260 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000173387s
	[INFO] 10.244.0.3:34031 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000091425s
	[INFO] 10.244.0.3:48273 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082832s
	[INFO] 10.244.0.3:50031 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000064227s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8622827e0bea2327d91748e496fa5a35b91539cac3bc3d17318689ecbf817385] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:49996 - 391 "HINFO IN 3521685697945954381.4462412365812783941. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012983011s
	
	
	==> describe nodes <==
	Name:               multinode-325713
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-325713
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=multinode-325713
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T19_46_10_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:46:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-325713
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:57:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 19:52:56 +0000   Tue, 01 Oct 2024 19:46:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 19:52:56 +0000   Tue, 01 Oct 2024 19:46:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 19:52:56 +0000   Tue, 01 Oct 2024 19:46:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 19:52:56 +0000   Tue, 01 Oct 2024 19:46:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.165
	  Hostname:    multinode-325713
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8239c3eb3fc9460a961da917ebe46ad0
	  System UUID:                8239c3eb-3fc9-460a-961d-a917ebe46ad0
	  Boot ID:                    078d2ed7-8b7e-4053-8168-a2fd02e67089
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nhjc5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m44s
	  kube-system                 coredns-7c65d6cfc9-swx5f                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-325713                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-7kvjb                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-325713             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-325713    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-wqznz                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-325713             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 4m4s                   kube-proxy       
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node multinode-325713 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node multinode-325713 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)      kubelet          Node multinode-325713 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-325713 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-325713 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-325713 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node multinode-325713 event: Registered Node multinode-325713 in Controller
	  Normal  NodeReady                10m                    kubelet          Node multinode-325713 status is now: NodeReady
	  Normal  Starting                 4m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m11s (x8 over 4m11s)  kubelet          Node multinode-325713 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s (x8 over 4m11s)  kubelet          Node multinode-325713 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s (x7 over 4m11s)  kubelet          Node multinode-325713 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m3s                   node-controller  Node multinode-325713 event: Registered Node multinode-325713 in Controller
	
	
	Name:               multinode-325713-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-325713-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=multinode-325713
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T19_53_36_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:53:35 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-325713-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:54:38 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 01 Oct 2024 19:54:06 +0000   Tue, 01 Oct 2024 19:55:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 01 Oct 2024 19:54:06 +0000   Tue, 01 Oct 2024 19:55:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 01 Oct 2024 19:54:06 +0000   Tue, 01 Oct 2024 19:55:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 01 Oct 2024 19:54:06 +0000   Tue, 01 Oct 2024 19:55:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.53
	  Hostname:    multinode-325713-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c04112f801c44849a3b43c67917acef8
	  System UUID:                c04112f8-01c4-4849-a3b4-3c67917acef8
	  Boot ID:                    1c3b35d5-7e26-42ef-a4a8-5dfd5f914bf4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lppvx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 kindnet-h8ld7              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-kf9lq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 3m22s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-325713-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-325713-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-325713-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                9m46s                  kubelet          Node multinode-325713-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m28s (x2 over 3m28s)  kubelet          Node multinode-325713-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m28s (x2 over 3m28s)  kubelet          Node multinode-325713-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m28s (x2 over 3m28s)  kubelet          Node multinode-325713-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m23s                  node-controller  Node multinode-325713-m02 event: Registered Node multinode-325713-m02 in Controller
	  Normal  NodeReady                3m8s                   kubelet          Node multinode-325713-m02 status is now: NodeReady
	  Normal  NodeNotReady             103s                   node-controller  Node multinode-325713-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.057422] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.179463] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.126943] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.280039] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +3.805720] systemd-fstab-generator[738]: Ignoring "noauto" option for root device
	[Oct 1 19:46] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.059319] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.994477] systemd-fstab-generator[1208]: Ignoring "noauto" option for root device
	[  +0.097991] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.132092] systemd-fstab-generator[1312]: Ignoring "noauto" option for root device
	[  +0.137973] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.491133] kauditd_printk_skb: 69 callbacks suppressed
	[Oct 1 19:47] kauditd_printk_skb: 12 callbacks suppressed
	[Oct 1 19:52] systemd-fstab-generator[2612]: Ignoring "noauto" option for root device
	[  +0.145055] systemd-fstab-generator[2624]: Ignoring "noauto" option for root device
	[  +0.181365] systemd-fstab-generator[2638]: Ignoring "noauto" option for root device
	[  +0.136701] systemd-fstab-generator[2650]: Ignoring "noauto" option for root device
	[  +0.279353] systemd-fstab-generator[2678]: Ignoring "noauto" option for root device
	[  +6.384532] systemd-fstab-generator[2773]: Ignoring "noauto" option for root device
	[  +0.082798] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.990543] systemd-fstab-generator[2893]: Ignoring "noauto" option for root device
	[  +5.716695] kauditd_printk_skb: 74 callbacks suppressed
	[Oct 1 19:53] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.093272] systemd-fstab-generator[3748]: Ignoring "noauto" option for root device
	[ +20.528732] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [a87badf95fa60dbc3077c37ef50ce351665296ba8906532d97ef8dc1f7e01639] <==
	{"level":"info","ts":"2024-10-01T19:46:04.447958Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T19:46:04.441276Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T19:46:04.444434Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T19:46:04.455762Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.165:2379"}
	{"level":"info","ts":"2024-10-01T19:46:04.472463Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-01T19:46:57.093437Z","caller":"traceutil/trace.go:171","msg":"trace[1354207169] transaction","detail":"{read_only:false; response_revision:474; number_of_response:1; }","duration":"231.607464ms","start":"2024-10-01T19:46:56.861803Z","end":"2024-10-01T19:46:57.093410Z","steps":["trace[1354207169] 'process raft request'  (duration: 227.227597ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T19:47:00.449273Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.528566ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15705901595387214331 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kindnet-h8ld7.17fa6be0ca6e9b9d\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kindnet-h8ld7.17fa6be0ca6e9b9d\" value_size:676 lease:6482529558532437523 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-10-01T19:47:00.449389Z","caller":"traceutil/trace.go:171","msg":"trace[306755950] transaction","detail":"{read_only:false; response_revision:507; number_of_response:1; }","duration":"173.637589ms","start":"2024-10-01T19:47:00.275735Z","end":"2024-10-01T19:47:00.449373Z","steps":["trace[306755950] 'process raft request'  (duration: 41.517342ms)","trace[306755950] 'compare'  (duration: 131.414208ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-01T19:47:54.563676Z","caller":"traceutil/trace.go:171","msg":"trace[1298207694] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"221.532393ms","start":"2024-10-01T19:47:54.342128Z","end":"2024-10-01T19:47:54.563660Z","steps":["trace[1298207694] 'process raft request'  (duration: 221.306374ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T19:47:54.563644Z","caller":"traceutil/trace.go:171","msg":"trace[536335664] linearizableReadLoop","detail":"{readStateIndex:642; appliedIndex:641; }","duration":"197.941427ms","start":"2024-10-01T19:47:54.365680Z","end":"2024-10-01T19:47:54.563621Z","steps":["trace[536335664] 'read index received'  (duration: 197.720553ms)","trace[536335664] 'applied index is now lower than readState.Index'  (duration: 219.861µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-01T19:47:54.563834Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.082016ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-325713-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T19:47:54.564075Z","caller":"traceutil/trace.go:171","msg":"trace[220120163] range","detail":"{range_begin:/registry/minions/multinode-325713-m03; range_end:; response_count:0; response_revision:610; }","duration":"198.379891ms","start":"2024-10-01T19:47:54.365674Z","end":"2024-10-01T19:47:54.564054Z","steps":["trace[220120163] 'agreement among raft nodes before linearized reading'  (duration: 198.045993ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T19:47:57.764246Z","caller":"traceutil/trace.go:171","msg":"trace[1694425089] transaction","detail":"{read_only:false; response_revision:637; number_of_response:1; }","duration":"169.896676ms","start":"2024-10-01T19:47:57.594331Z","end":"2024-10-01T19:47:57.764228Z","steps":["trace[1694425089] 'process raft request'  (duration: 169.759607ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T19:48:03.643460Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.809709ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15705901595387214904 > lease_revoke:<id:59f692499e4bbb8c>","response":"size:28"}
	{"level":"info","ts":"2024-10-01T19:48:50.885821Z","caller":"traceutil/trace.go:171","msg":"trace[97414680] transaction","detail":"{read_only:false; response_revision:740; number_of_response:1; }","duration":"171.598158ms","start":"2024-10-01T19:48:50.714163Z","end":"2024-10-01T19:48:50.885762Z","steps":["trace[97414680] 'process raft request'  (duration: 171.168601ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T19:51:11.644235Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-01T19:51:11.644349Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-325713","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.165:2380"],"advertise-client-urls":["https://192.168.39.165:2379"]}
	{"level":"warn","ts":"2024-10-01T19:51:11.644471Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-01T19:51:11.644647Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-01T19:51:11.721915Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.165:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-01T19:51:11.721973Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.165:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-01T19:51:11.723774Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ffc3b7517aaad9f6","current-leader-member-id":"ffc3b7517aaad9f6"}
	{"level":"info","ts":"2024-10-01T19:51:11.726502Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2024-10-01T19:51:11.726797Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2024-10-01T19:51:11.726892Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-325713","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.165:2380"],"advertise-client-urls":["https://192.168.39.165:2379"]}
	
	
	==> etcd [e8cca9641eaeaec6d2442941fde94017ec30b1be1fa78944aa88014145d48b14] <==
	{"level":"info","ts":"2024-10-01T19:52:53.571717Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"58f0a6b9f17e1f60","local-member-id":"ffc3b7517aaad9f6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T19:52:53.571759Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T19:52:53.587369Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-01T19:52:53.587682Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"ffc3b7517aaad9f6","initial-advertise-peer-urls":["https://192.168.39.165:2380"],"listen-peer-urls":["https://192.168.39.165:2380"],"advertise-client-urls":["https://192.168.39.165:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.165:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-01T19:52:53.587720Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-01T19:52:53.587815Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2024-10-01T19:52:53.587834Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2024-10-01T19:52:55.384683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-01T19:52:55.384875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-01T19:52:55.384945Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 received MsgPreVoteResp from ffc3b7517aaad9f6 at term 2"}
	{"level":"info","ts":"2024-10-01T19:52:55.384985Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 became candidate at term 3"}
	{"level":"info","ts":"2024-10-01T19:52:55.385010Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 received MsgVoteResp from ffc3b7517aaad9f6 at term 3"}
	{"level":"info","ts":"2024-10-01T19:52:55.385038Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 became leader at term 3"}
	{"level":"info","ts":"2024-10-01T19:52:55.385064Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ffc3b7517aaad9f6 elected leader ffc3b7517aaad9f6 at term 3"}
	{"level":"info","ts":"2024-10-01T19:52:55.393906Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T19:52:55.393860Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ffc3b7517aaad9f6","local-member-attributes":"{Name:multinode-325713 ClientURLs:[https://192.168.39.165:2379]}","request-path":"/0/members/ffc3b7517aaad9f6/attributes","cluster-id":"58f0a6b9f17e1f60","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-01T19:52:55.394888Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T19:52:55.395158Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-01T19:52:55.395186Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-01T19:52:55.395314Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T19:52:55.395746Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T19:52:55.396557Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.165:2379"}
	{"level":"info","ts":"2024-10-01T19:52:55.397809Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-01T19:53:40.220136Z","caller":"traceutil/trace.go:171","msg":"trace[1499003064] transaction","detail":"{read_only:false; response_revision:1076; number_of_response:1; }","duration":"171.764299ms","start":"2024-10-01T19:53:40.048337Z","end":"2024-10-01T19:53:40.220101Z","steps":["trace[1499003064] 'process raft request'  (duration: 171.623621ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T19:54:18.072134Z","caller":"traceutil/trace.go:171","msg":"trace[1789535552] transaction","detail":"{read_only:false; response_revision:1168; number_of_response:1; }","duration":"109.263722ms","start":"2024-10-01T19:54:17.962855Z","end":"2024-10-01T19:54:18.072119Z","steps":["trace[1789535552] 'process raft request'  (duration: 109.140663ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:57:03 up 11 min,  0 users,  load average: 0.51, 0.32, 0.15
	Linux multinode-325713 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [74cc8c8d45eb8e370db6e5a01ea691e69597f620ba5e51937ab8a6c8a4efb2ab] <==
	I1001 19:50:26.031478       1 main.go:322] Node multinode-325713-m03 has CIDR [10.244.3.0/24] 
	I1001 19:50:36.025474       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I1001 19:50:36.025525       1 main.go:299] handling current node
	I1001 19:50:36.025540       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1001 19:50:36.025546       1 main.go:322] Node multinode-325713-m02 has CIDR [10.244.1.0/24] 
	I1001 19:50:36.025729       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I1001 19:50:36.025748       1 main.go:322] Node multinode-325713-m03 has CIDR [10.244.3.0/24] 
	I1001 19:50:46.029761       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I1001 19:50:46.029887       1 main.go:322] Node multinode-325713-m03 has CIDR [10.244.3.0/24] 
	I1001 19:50:46.030059       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I1001 19:50:46.030083       1 main.go:299] handling current node
	I1001 19:50:46.030105       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1001 19:50:46.030121       1 main.go:322] Node multinode-325713-m02 has CIDR [10.244.1.0/24] 
	I1001 19:50:56.030801       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I1001 19:50:56.031048       1 main.go:299] handling current node
	I1001 19:50:56.031086       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1001 19:50:56.031110       1 main.go:322] Node multinode-325713-m02 has CIDR [10.244.1.0/24] 
	I1001 19:50:56.031347       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I1001 19:50:56.031378       1 main.go:322] Node multinode-325713-m03 has CIDR [10.244.3.0/24] 
	I1001 19:51:06.033064       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I1001 19:51:06.033187       1 main.go:299] handling current node
	I1001 19:51:06.033227       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1001 19:51:06.033233       1 main.go:322] Node multinode-325713-m02 has CIDR [10.244.1.0/24] 
	I1001 19:51:06.033380       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I1001 19:51:06.033403       1 main.go:322] Node multinode-325713-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [e70eda3c33aa85b6d005a8045ca58d14983670f13a2f0a3403770d9d77d1eaf5] <==
	I1001 19:55:58.828140       1 main.go:322] Node multinode-325713-m02 has CIDR [10.244.1.0/24] 
	I1001 19:56:08.835892       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I1001 19:56:08.835937       1 main.go:299] handling current node
	I1001 19:56:08.835952       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1001 19:56:08.835957       1 main.go:322] Node multinode-325713-m02 has CIDR [10.244.1.0/24] 
	I1001 19:56:18.827243       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I1001 19:56:18.828177       1 main.go:299] handling current node
	I1001 19:56:18.828253       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1001 19:56:18.828283       1 main.go:322] Node multinode-325713-m02 has CIDR [10.244.1.0/24] 
	I1001 19:56:28.827936       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I1001 19:56:28.828090       1 main.go:299] handling current node
	I1001 19:56:28.828122       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1001 19:56:28.828140       1 main.go:322] Node multinode-325713-m02 has CIDR [10.244.1.0/24] 
	I1001 19:56:38.835788       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I1001 19:56:38.835917       1 main.go:299] handling current node
	I1001 19:56:38.835945       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1001 19:56:38.835963       1 main.go:322] Node multinode-325713-m02 has CIDR [10.244.1.0/24] 
	I1001 19:56:48.836516       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I1001 19:56:48.836598       1 main.go:299] handling current node
	I1001 19:56:48.836619       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1001 19:56:48.836625       1 main.go:322] Node multinode-325713-m02 has CIDR [10.244.1.0/24] 
	I1001 19:56:58.827289       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I1001 19:56:58.827358       1 main.go:299] handling current node
	I1001 19:56:58.827378       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1001 19:56:58.827386       1 main.go:322] Node multinode-325713-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [19d51ef666dc5924ed51fc7a56fd063604dfa6c7c7ed8f6f1edf773e2ddf803a] <==
	I1001 19:51:11.671009       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	W1001 19:51:11.673835       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1001 19:51:11.679054       1 storage_flowcontrol.go:186] APF bootstrap ensurer is exiting
	I1001 19:51:11.681072       1 cluster_authentication_trust_controller.go:466] Shutting down cluster_authentication_trust_controller controller
	I1001 19:51:11.681298       1 crd_finalizer.go:281] Shutting down CRDFinalizer
	I1001 19:51:11.681337       1 establishing_controller.go:92] Shutting down EstablishingController
	I1001 19:51:11.681352       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	I1001 19:51:11.681370       1 apiservice_controller.go:134] Shutting down APIServiceRegistrationController
	I1001 19:51:11.681397       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I1001 19:51:11.681421       1 controller.go:132] Ending legacy_token_tracking_controller
	I1001 19:51:11.681443       1 controller.go:133] Shutting down legacy_token_tracking_controller
	I1001 19:51:11.681458       1 naming_controller.go:305] Shutting down NamingConditionController
	I1001 19:51:11.681487       1 controller.go:120] Shutting down OpenAPI V3 controller
	I1001 19:51:11.681510       1 autoregister_controller.go:168] Shutting down autoregister controller
	I1001 19:51:11.681541       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I1001 19:51:11.682020       1 controller.go:170] Shutting down OpenAPI controller
	I1001 19:51:11.682054       1 apiapproval_controller.go:201] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I1001 19:51:11.682079       1 nonstructuralschema_controller.go:207] Shutting down NonStructuralSchemaConditionController
	I1001 19:51:11.682097       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I1001 19:51:11.683059       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I1001 19:51:11.683083       1 remote_available_controller.go:427] Shutting down RemoteAvailability controller
	I1001 19:51:11.683099       1 local_available_controller.go:172] Shutting down LocalAvailability controller
	I1001 19:51:11.687793       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	W1001 19:51:11.690966       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 19:51:11.691043       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [454294743e585aefdb165075a88ffc9546bf7dac940543d3064fa8252cfe5b64] <==
	I1001 19:52:56.730161       1 aggregator.go:171] initial CRD sync complete...
	I1001 19:52:56.730182       1 autoregister_controller.go:144] Starting autoregister controller
	I1001 19:52:56.730189       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1001 19:52:56.730193       1 cache.go:39] Caches are synced for autoregister controller
	I1001 19:52:56.745684       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1001 19:52:56.745806       1 policy_source.go:224] refreshing policies
	I1001 19:52:56.746635       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1001 19:52:56.786887       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1001 19:52:56.787253       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1001 19:52:56.787310       1 shared_informer.go:320] Caches are synced for configmaps
	I1001 19:52:56.787357       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1001 19:52:56.787238       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1001 19:52:56.788925       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1001 19:52:56.789021       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1001 19:52:56.793553       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E1001 19:52:56.795692       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1001 19:52:56.807226       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1001 19:52:57.593505       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1001 19:52:58.802961       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1001 19:52:58.958315       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1001 19:52:58.971905       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1001 19:52:59.050630       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1001 19:52:59.060920       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1001 19:53:00.043246       1 controller.go:615] quota admission added evaluator for: endpoints
	I1001 19:53:00.442839       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [91728fb10efec288907940bbd25068fbc59d3bf362903c3ee875cb425a7e9570] <==
	I1001 19:54:14.593814       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-325713-m03" podCIDRs=["10.244.2.0/24"]
	I1001 19:54:14.593855       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:54:14.594073       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:54:14.606451       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:54:14.995082       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:54:15.325003       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:54:15.460644       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:54:24.693123       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:54:33.279070       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:54:33.279340       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-325713-m03"
	I1001 19:54:33.293523       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:54:35.241489       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:54:37.905146       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:54:37.915695       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:54:38.465019       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-325713-m02"
	I1001 19:54:38.465534       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:55:20.255235       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-7xgfk"
	I1001 19:55:20.264258       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m02"
	I1001 19:55:20.286297       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m02"
	I1001 19:55:20.295821       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.114712ms"
	I1001 19:55:20.295918       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="50.742µs"
	I1001 19:55:20.319182       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-7xgfk"
	I1001 19:55:20.319265       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-7wwrh"
	I1001 19:55:20.344329       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-7wwrh"
	I1001 19:55:25.339215       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m02"
	
	
	==> kube-controller-manager [99e0c7308d4811ed9a37c1503f0c4f652731732c727fddb6b2ca91a0d1d1207a] <==
	I1001 19:48:45.275166       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:48:45.275227       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-325713-m02"
	I1001 19:48:46.437029       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-325713-m03\" does not exist"
	I1001 19:48:46.437619       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-325713-m02"
	I1001 19:48:46.447709       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-325713-m03" podCIDRs=["10.244.3.0/24"]
	I1001 19:48:46.447944       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:48:46.448142       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:48:46.466960       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:48:46.843979       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:48:47.217507       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:48:48.270179       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:48:56.717472       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:49:06.078704       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-325713-m02"
	I1001 19:49:06.079173       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:49:06.090897       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:49:08.197269       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:49:48.215127       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:49:48.215626       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-325713-m02"
	I1001 19:49:48.233741       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:49:53.249182       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m02"
	I1001 19:49:53.263364       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m02"
	I1001 19:49:53.305459       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.674576ms"
	I1001 19:49:53.307273       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="170.732µs"
	I1001 19:49:53.317462       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m03"
	I1001 19:50:03.394952       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-325713-m02"
	
	
	==> kube-proxy [c753d689839b591bdbd1e4b857d4fc8207b7f30ad75e876545e5c20f6aea9c17] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 19:46:15.245419       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 19:46:15.255000       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.165"]
	E1001 19:46:15.255073       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 19:46:15.307852       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 19:46:15.307891       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 19:46:15.307914       1 server_linux.go:169] "Using iptables Proxier"
	I1001 19:46:15.311233       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 19:46:15.311440       1 server.go:483] "Version info" version="v1.31.1"
	I1001 19:46:15.311451       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 19:46:15.328491       1 config.go:328] "Starting node config controller"
	I1001 19:46:15.328509       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 19:46:15.329498       1 config.go:199] "Starting service config controller"
	I1001 19:46:15.329507       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 19:46:15.329759       1 config.go:105] "Starting endpoint slice config controller"
	I1001 19:46:15.329770       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 19:46:15.429002       1 shared_informer.go:320] Caches are synced for node config
	I1001 19:46:15.431157       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 19:46:15.431287       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [e133d880f95931cecea791c307dbd3b63126f009258e015eab481ab1ffde4c7a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 19:52:58.223930       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 19:52:58.241763       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.165"]
	E1001 19:52:58.241904       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 19:52:58.282871       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 19:52:58.282913       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 19:52:58.282938       1 server_linux.go:169] "Using iptables Proxier"
	I1001 19:52:58.287352       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 19:52:58.287744       1 server.go:483] "Version info" version="v1.31.1"
	I1001 19:52:58.287795       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 19:52:58.289227       1 config.go:199] "Starting service config controller"
	I1001 19:52:58.289303       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 19:52:58.289348       1 config.go:105] "Starting endpoint slice config controller"
	I1001 19:52:58.289365       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 19:52:58.290830       1 config.go:328] "Starting node config controller"
	I1001 19:52:58.290965       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 19:52:58.389655       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 19:52:58.389703       1 shared_informer.go:320] Caches are synced for service config
	I1001 19:52:58.392682       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b5825a9ff6472b047a265e5afd5811414044723e46816206e4beb6b418b69e24] <==
	E1001 19:46:06.476270       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 19:46:07.359417       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1001 19:46:07.359523       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 19:46:07.391979       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1001 19:46:07.392078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1001 19:46:07.403883       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1001 19:46:07.403981       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1001 19:46:07.502904       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1001 19:46:07.503314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 19:46:07.583540       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1001 19:46:07.583736       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 19:46:07.670647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1001 19:46:07.670928       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 19:46:07.670887       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1001 19:46:07.671842       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 19:46:07.821384       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1001 19:46:07.821485       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 19:46:07.845437       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1001 19:46:07.845533       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 19:46:07.859913       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1001 19:46:07.859962       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 19:46:07.862212       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1001 19:46:07.862255       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1001 19:46:09.266470       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1001 19:51:11.649872       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [cb81c99fa1c2e42286c70d520a49c2ad2ec6c2c8c728399216de29090137bf71] <==
	I1001 19:52:54.160233       1 serving.go:386] Generated self-signed cert in-memory
	W1001 19:52:56.666736       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1001 19:52:56.666826       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1001 19:52:56.666837       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1001 19:52:56.666862       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1001 19:52:56.717822       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1001 19:52:56.717914       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 19:52:56.720051       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 19:52:56.720163       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1001 19:52:56.720237       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1001 19:52:56.720329       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1001 19:52:56.821143       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 01 19:55:52 multinode-325713 kubelet[2900]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 01 19:55:52 multinode-325713 kubelet[2900]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 01 19:55:52 multinode-325713 kubelet[2900]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 19:55:52 multinode-325713 kubelet[2900]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 19:55:52 multinode-325713 kubelet[2900]: E1001 19:55:52.340635    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812552340283004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:55:52 multinode-325713 kubelet[2900]: E1001 19:55:52.340660    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812552340283004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:56:02 multinode-325713 kubelet[2900]: E1001 19:56:02.342883    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812562342480995,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:56:02 multinode-325713 kubelet[2900]: E1001 19:56:02.342910    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812562342480995,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:56:12 multinode-325713 kubelet[2900]: E1001 19:56:12.344912    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812572344541922,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:56:12 multinode-325713 kubelet[2900]: E1001 19:56:12.345204    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812572344541922,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:56:22 multinode-325713 kubelet[2900]: E1001 19:56:22.347142    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812582346790795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:56:22 multinode-325713 kubelet[2900]: E1001 19:56:22.347548    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812582346790795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:56:32 multinode-325713 kubelet[2900]: E1001 19:56:32.349630    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812592349088778,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:56:32 multinode-325713 kubelet[2900]: E1001 19:56:32.349672    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812592349088778,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:56:42 multinode-325713 kubelet[2900]: E1001 19:56:42.352272    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812602351916709,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:56:42 multinode-325713 kubelet[2900]: E1001 19:56:42.352299    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812602351916709,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:56:52 multinode-325713 kubelet[2900]: E1001 19:56:52.284427    2900 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 01 19:56:52 multinode-325713 kubelet[2900]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 01 19:56:52 multinode-325713 kubelet[2900]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 01 19:56:52 multinode-325713 kubelet[2900]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 19:56:52 multinode-325713 kubelet[2900]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 19:56:52 multinode-325713 kubelet[2900]: E1001 19:56:52.354729    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812612354295801,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:56:52 multinode-325713 kubelet[2900]: E1001 19:56:52.354756    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812612354295801,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:57:02 multinode-325713 kubelet[2900]: E1001 19:57:02.356967    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812622356489961,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 19:57:02 multinode-325713 kubelet[2900]: E1001 19:57:02.356997    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727812622356489961,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1001 19:57:02.439460   50962 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19736-11198/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
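Note on the stderr above: `bufio.Scanner: token too long` is Go's bufio.ErrTooLong, returned when a single line in lastStart.txt exceeds the scanner's default 64 KiB token limit, so the helper could not echo the last start logs. Below is a minimal sketch of reading such a file with an enlarged buffer; this is not minikube's actual logs.go code, and the file name is illustrative.

    package main

    import (
        "bufio"
        "fmt"
        "os"
    )

    func main() {
        f, err := os.Open("lastStart.txt") // illustrative file name
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        // The default per-line cap is bufio.MaxScanTokenSize (64 KiB); a longer line
        // stops Scan with bufio.ErrTooLong ("token too long"). Raise the cap, e.g. to 10 MiB.
        sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
        for sc.Scan() {
            _ = sc.Text() // process one (possibly very long) line
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, "scan error:", err)
        }
    }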
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-325713 -n multinode-325713
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-325713 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (144.68s)

                                                
                                    
x
+
TestPreload (179.21s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-118977 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1001 20:01:34.839894   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:01:59.026226   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-118977 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m40.606590427s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-118977 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-118977 image pull gcr.io/k8s-minikube/busybox: (3.438318637s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-118977
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-118977: (6.605580821s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-118977 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-118977 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m5.679124814s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-118977 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
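For context, the failing sequence condenses to the sketch below, which simply replays the commands recorded in the Audit table and test output through os/exec. It is not the actual preload_test.go; the run helper, the dropped verbosity flags, and the hard-coded binary path are illustrative.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run invokes the minikube binary used by this CI job (path taken from the log above).
    func run(args ...string) {
        out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
        fmt.Printf("$ minikube %v\nerr=%v\n%s\n", args, err, out)
    }

    func main() {
        p := "test-preload-118977"
        // 1. Start an old Kubernetes with preload disabled, so images land in container storage normally.
        run("start", "-p", p, "--memory=2200", "--preload=false",
            "--driver=kvm2", "--container-runtime=crio", "--kubernetes-version=v1.24.4")
        // 2. Pull an extra image into the node.
        run("-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox")
        // 3. Stop, then restart without --preload=false.
        run("stop", "-p", p)
        run("start", "-p", p, "--memory=2200", "--driver=kvm2", "--container-runtime=crio")
        // 4. The test expects gcr.io/k8s-minikube/busybox to still appear here; in this run it did not.
        run("-p", p, "image", "list")
    }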
panic.go:629: *** TestPreload FAILED at 2024-10-01 20:03:45.785546669 +0000 UTC m=+4176.734350092
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-118977 -n test-preload-118977
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-118977 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-118977 logs -n 25: (1.076659679s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-325713 ssh -n                                                                 | multinode-325713     | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | multinode-325713-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-325713 ssh -n multinode-325713 sudo cat                                       | multinode-325713     | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | /home/docker/cp-test_multinode-325713-m03_multinode-325713.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-325713 cp multinode-325713-m03:/home/docker/cp-test.txt                       | multinode-325713     | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | multinode-325713-m02:/home/docker/cp-test_multinode-325713-m03_multinode-325713-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-325713 ssh -n                                                                 | multinode-325713     | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | multinode-325713-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-325713 ssh -n multinode-325713-m02 sudo cat                                   | multinode-325713     | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	|         | /home/docker/cp-test_multinode-325713-m03_multinode-325713-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-325713 node stop m03                                                          | multinode-325713     | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:48 UTC |
	| node    | multinode-325713 node start                                                             | multinode-325713     | jenkins | v1.34.0 | 01 Oct 24 19:48 UTC | 01 Oct 24 19:49 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-325713                                                                | multinode-325713     | jenkins | v1.34.0 | 01 Oct 24 19:49 UTC |                     |
	| stop    | -p multinode-325713                                                                     | multinode-325713     | jenkins | v1.34.0 | 01 Oct 24 19:49 UTC |                     |
	| start   | -p multinode-325713                                                                     | multinode-325713     | jenkins | v1.34.0 | 01 Oct 24 19:51 UTC | 01 Oct 24 19:54 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-325713                                                                | multinode-325713     | jenkins | v1.34.0 | 01 Oct 24 19:54 UTC |                     |
	| node    | multinode-325713 node delete                                                            | multinode-325713     | jenkins | v1.34.0 | 01 Oct 24 19:54 UTC | 01 Oct 24 19:54 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-325713 stop                                                                   | multinode-325713     | jenkins | v1.34.0 | 01 Oct 24 19:54 UTC |                     |
	| start   | -p multinode-325713                                                                     | multinode-325713     | jenkins | v1.34.0 | 01 Oct 24 19:57 UTC | 01 Oct 24 20:00 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-325713                                                                | multinode-325713     | jenkins | v1.34.0 | 01 Oct 24 20:00 UTC |                     |
	| start   | -p multinode-325713-m02                                                                 | multinode-325713-m02 | jenkins | v1.34.0 | 01 Oct 24 20:00 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-325713-m03                                                                 | multinode-325713-m03 | jenkins | v1.34.0 | 01 Oct 24 20:00 UTC | 01 Oct 24 20:00 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-325713                                                                 | multinode-325713     | jenkins | v1.34.0 | 01 Oct 24 20:00 UTC |                     |
	| delete  | -p multinode-325713-m03                                                                 | multinode-325713-m03 | jenkins | v1.34.0 | 01 Oct 24 20:00 UTC | 01 Oct 24 20:00 UTC |
	| delete  | -p multinode-325713                                                                     | multinode-325713     | jenkins | v1.34.0 | 01 Oct 24 20:00 UTC | 01 Oct 24 20:00 UTC |
	| start   | -p test-preload-118977                                                                  | test-preload-118977  | jenkins | v1.34.0 | 01 Oct 24 20:00 UTC | 01 Oct 24 20:02 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-118977 image pull                                                          | test-preload-118977  | jenkins | v1.34.0 | 01 Oct 24 20:02 UTC | 01 Oct 24 20:02 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-118977                                                                  | test-preload-118977  | jenkins | v1.34.0 | 01 Oct 24 20:02 UTC | 01 Oct 24 20:02 UTC |
	| start   | -p test-preload-118977                                                                  | test-preload-118977  | jenkins | v1.34.0 | 01 Oct 24 20:02 UTC | 01 Oct 24 20:03 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-118977 image list                                                          | test-preload-118977  | jenkins | v1.34.0 | 01 Oct 24 20:03 UTC | 01 Oct 24 20:03 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
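	
	For reference, the TestPreload sequence recorded in the table above can be replayed by hand with the same flags. This is only a sketch, assuming minikube v1.34.0, the kvm2 driver, and CRI-O support are available locally; the profile name is kept from the report purely for illustration:
	
	  # initial start with the preload tarball disabled, pinned to Kubernetes v1.24.4 (flags taken from the table above)
	  minikube start -p test-preload-118977 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4
	  # pull an extra image, stop, then restart so the second start has to restore the cached images
	  minikube -p test-preload-118977 image pull gcr.io/k8s-minikube/busybox
	  minikube stop -p test-preload-118977
	  minikube start -p test-preload-118977 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=crio
	  minikube -p test-preload-118977 image list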
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 20:02:39
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 20:02:39.926603   53352 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:02:39.926819   53352 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:02:39.926827   53352 out.go:358] Setting ErrFile to fd 2...
	I1001 20:02:39.926831   53352 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:02:39.926996   53352 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 20:02:39.927528   53352 out.go:352] Setting JSON to false
	I1001 20:02:39.928391   53352 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6302,"bootTime":1727806658,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 20:02:39.928485   53352 start.go:139] virtualization: kvm guest
	I1001 20:02:39.930223   53352 out.go:177] * [test-preload-118977] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 20:02:39.931503   53352 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 20:02:39.931547   53352 notify.go:220] Checking for updates...
	I1001 20:02:39.933628   53352 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 20:02:39.934635   53352 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:02:39.935621   53352 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 20:02:39.936591   53352 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 20:02:39.937694   53352 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 20:02:39.939119   53352 config.go:182] Loaded profile config "test-preload-118977": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1001 20:02:39.939499   53352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:02:39.939561   53352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:02:39.955002   53352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40893
	I1001 20:02:39.955522   53352 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:02:39.956101   53352 main.go:141] libmachine: Using API Version  1
	I1001 20:02:39.956120   53352 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:02:39.956493   53352 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:02:39.956682   53352 main.go:141] libmachine: (test-preload-118977) Calling .DriverName
	I1001 20:02:39.958231   53352 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1001 20:02:39.959273   53352 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 20:02:39.959580   53352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:02:39.959622   53352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:02:39.974049   53352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34003
	I1001 20:02:39.974509   53352 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:02:39.975020   53352 main.go:141] libmachine: Using API Version  1
	I1001 20:02:39.975040   53352 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:02:39.975331   53352 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:02:39.975531   53352 main.go:141] libmachine: (test-preload-118977) Calling .DriverName
	I1001 20:02:40.010076   53352 out.go:177] * Using the kvm2 driver based on existing profile
	I1001 20:02:40.011633   53352 start.go:297] selected driver: kvm2
	I1001 20:02:40.011651   53352 start.go:901] validating driver "kvm2" against &{Name:test-preload-118977 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-118977 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:02:40.011759   53352 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 20:02:40.012495   53352 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:02:40.012561   53352 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-11198/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 20:02:40.027982   53352 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 20:02:40.028345   53352 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:02:40.028402   53352 cni.go:84] Creating CNI manager for ""
	I1001 20:02:40.028450   53352 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:02:40.028510   53352 start.go:340] cluster config:
	{Name:test-preload-118977 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-118977 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:02:40.028610   53352 iso.go:125] acquiring lock: {Name:mk06aa0d5182c9fbfa5e40313025cbec2b4400a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:02:40.030368   53352 out.go:177] * Starting "test-preload-118977" primary control-plane node in "test-preload-118977" cluster
	I1001 20:02:40.031673   53352 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1001 20:02:40.523288   53352 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1001 20:02:40.523341   53352 cache.go:56] Caching tarball of preloaded images
	I1001 20:02:40.523530   53352 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1001 20:02:40.525362   53352 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1001 20:02:40.526469   53352 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1001 20:02:40.627114   53352 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1001 20:02:51.663817   53352 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1001 20:02:51.663920   53352 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1001 20:02:52.503272   53352 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I1001 20:02:52.503388   53352 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/test-preload-118977/config.json ...
	I1001 20:02:52.503631   53352 start.go:360] acquireMachinesLock for test-preload-118977: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 20:02:52.503691   53352 start.go:364] duration metric: took 40.129µs to acquireMachinesLock for "test-preload-118977"
	I1001 20:02:52.503706   53352 start.go:96] Skipping create...Using existing machine configuration
	I1001 20:02:52.503712   53352 fix.go:54] fixHost starting: 
	I1001 20:02:52.503967   53352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:02:52.503999   53352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:02:52.518653   53352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43589
	I1001 20:02:52.519132   53352 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:02:52.519650   53352 main.go:141] libmachine: Using API Version  1
	I1001 20:02:52.519674   53352 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:02:52.520011   53352 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:02:52.520192   53352 main.go:141] libmachine: (test-preload-118977) Calling .DriverName
	I1001 20:02:52.520324   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetState
	I1001 20:02:52.522030   53352 fix.go:112] recreateIfNeeded on test-preload-118977: state=Stopped err=<nil>
	I1001 20:02:52.522050   53352 main.go:141] libmachine: (test-preload-118977) Calling .DriverName
	W1001 20:02:52.522204   53352 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 20:02:52.524248   53352 out.go:177] * Restarting existing kvm2 VM for "test-preload-118977" ...
	I1001 20:02:52.525421   53352 main.go:141] libmachine: (test-preload-118977) Calling .Start
	I1001 20:02:52.525621   53352 main.go:141] libmachine: (test-preload-118977) Ensuring networks are active...
	I1001 20:02:52.526464   53352 main.go:141] libmachine: (test-preload-118977) Ensuring network default is active
	I1001 20:02:52.526838   53352 main.go:141] libmachine: (test-preload-118977) Ensuring network mk-test-preload-118977 is active
	I1001 20:02:52.527170   53352 main.go:141] libmachine: (test-preload-118977) Getting domain xml...
	I1001 20:02:52.527841   53352 main.go:141] libmachine: (test-preload-118977) Creating domain...
	I1001 20:02:53.745107   53352 main.go:141] libmachine: (test-preload-118977) Waiting to get IP...
	I1001 20:02:53.746159   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:02:53.746500   53352 main.go:141] libmachine: (test-preload-118977) DBG | unable to find current IP address of domain test-preload-118977 in network mk-test-preload-118977
	I1001 20:02:53.746598   53352 main.go:141] libmachine: (test-preload-118977) DBG | I1001 20:02:53.746488   53427 retry.go:31] will retry after 211.221255ms: waiting for machine to come up
	I1001 20:02:53.959077   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:02:53.959543   53352 main.go:141] libmachine: (test-preload-118977) DBG | unable to find current IP address of domain test-preload-118977 in network mk-test-preload-118977
	I1001 20:02:53.959600   53352 main.go:141] libmachine: (test-preload-118977) DBG | I1001 20:02:53.959531   53427 retry.go:31] will retry after 342.925479ms: waiting for machine to come up
	I1001 20:02:54.304380   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:02:54.304801   53352 main.go:141] libmachine: (test-preload-118977) DBG | unable to find current IP address of domain test-preload-118977 in network mk-test-preload-118977
	I1001 20:02:54.304823   53352 main.go:141] libmachine: (test-preload-118977) DBG | I1001 20:02:54.304758   53427 retry.go:31] will retry after 398.236796ms: waiting for machine to come up
	I1001 20:02:54.704466   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:02:54.704987   53352 main.go:141] libmachine: (test-preload-118977) DBG | unable to find current IP address of domain test-preload-118977 in network mk-test-preload-118977
	I1001 20:02:54.705018   53352 main.go:141] libmachine: (test-preload-118977) DBG | I1001 20:02:54.704917   53427 retry.go:31] will retry after 556.342179ms: waiting for machine to come up
	I1001 20:02:55.262837   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:02:55.263328   53352 main.go:141] libmachine: (test-preload-118977) DBG | unable to find current IP address of domain test-preload-118977 in network mk-test-preload-118977
	I1001 20:02:55.263354   53352 main.go:141] libmachine: (test-preload-118977) DBG | I1001 20:02:55.263296   53427 retry.go:31] will retry after 657.721759ms: waiting for machine to come up
	I1001 20:02:55.922217   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:02:55.922625   53352 main.go:141] libmachine: (test-preload-118977) DBG | unable to find current IP address of domain test-preload-118977 in network mk-test-preload-118977
	I1001 20:02:55.922662   53352 main.go:141] libmachine: (test-preload-118977) DBG | I1001 20:02:55.922568   53427 retry.go:31] will retry after 656.141484ms: waiting for machine to come up
	I1001 20:02:56.580437   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:02:56.580976   53352 main.go:141] libmachine: (test-preload-118977) DBG | unable to find current IP address of domain test-preload-118977 in network mk-test-preload-118977
	I1001 20:02:56.581005   53352 main.go:141] libmachine: (test-preload-118977) DBG | I1001 20:02:56.580919   53427 retry.go:31] will retry after 932.72401ms: waiting for machine to come up
	I1001 20:02:57.515132   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:02:57.515617   53352 main.go:141] libmachine: (test-preload-118977) DBG | unable to find current IP address of domain test-preload-118977 in network mk-test-preload-118977
	I1001 20:02:57.515639   53352 main.go:141] libmachine: (test-preload-118977) DBG | I1001 20:02:57.515583   53427 retry.go:31] will retry after 1.180028468s: waiting for machine to come up
	I1001 20:02:58.696967   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:02:58.697382   53352 main.go:141] libmachine: (test-preload-118977) DBG | unable to find current IP address of domain test-preload-118977 in network mk-test-preload-118977
	I1001 20:02:58.697411   53352 main.go:141] libmachine: (test-preload-118977) DBG | I1001 20:02:58.697306   53427 retry.go:31] will retry after 1.427196238s: waiting for machine to come up
	I1001 20:03:00.125817   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:00.126262   53352 main.go:141] libmachine: (test-preload-118977) DBG | unable to find current IP address of domain test-preload-118977 in network mk-test-preload-118977
	I1001 20:03:00.126292   53352 main.go:141] libmachine: (test-preload-118977) DBG | I1001 20:03:00.126214   53427 retry.go:31] will retry after 2.195954782s: waiting for machine to come up
	I1001 20:03:02.323987   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:02.324449   53352 main.go:141] libmachine: (test-preload-118977) DBG | unable to find current IP address of domain test-preload-118977 in network mk-test-preload-118977
	I1001 20:03:02.324474   53352 main.go:141] libmachine: (test-preload-118977) DBG | I1001 20:03:02.324390   53427 retry.go:31] will retry after 2.370626259s: waiting for machine to come up
	I1001 20:03:04.697694   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:04.698115   53352 main.go:141] libmachine: (test-preload-118977) DBG | unable to find current IP address of domain test-preload-118977 in network mk-test-preload-118977
	I1001 20:03:04.698141   53352 main.go:141] libmachine: (test-preload-118977) DBG | I1001 20:03:04.698071   53427 retry.go:31] will retry after 2.580459598s: waiting for machine to come up
	I1001 20:03:07.281966   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:07.282358   53352 main.go:141] libmachine: (test-preload-118977) DBG | unable to find current IP address of domain test-preload-118977 in network mk-test-preload-118977
	I1001 20:03:07.282392   53352 main.go:141] libmachine: (test-preload-118977) DBG | I1001 20:03:07.282339   53427 retry.go:31] will retry after 3.364180823s: waiting for machine to come up
	I1001 20:03:10.649452   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:10.649892   53352 main.go:141] libmachine: (test-preload-118977) Found IP for machine: 192.168.39.195
	I1001 20:03:10.649913   53352 main.go:141] libmachine: (test-preload-118977) Reserving static IP address...
	I1001 20:03:10.649950   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has current primary IP address 192.168.39.195 and MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:10.650386   53352 main.go:141] libmachine: (test-preload-118977) DBG | found host DHCP lease matching {name: "test-preload-118977", mac: "52:54:00:c9:aa:30", ip: "192.168.39.195"} in network mk-test-preload-118977: {Iface:virbr1 ExpiryTime:2024-10-01 21:03:02 +0000 UTC Type:0 Mac:52:54:00:c9:aa:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-118977 Clientid:01:52:54:00:c9:aa:30}
	I1001 20:03:10.650401   53352 main.go:141] libmachine: (test-preload-118977) Reserved static IP address: 192.168.39.195
	I1001 20:03:10.650413   53352 main.go:141] libmachine: (test-preload-118977) DBG | skip adding static IP to network mk-test-preload-118977 - found existing host DHCP lease matching {name: "test-preload-118977", mac: "52:54:00:c9:aa:30", ip: "192.168.39.195"}
	I1001 20:03:10.650458   53352 main.go:141] libmachine: (test-preload-118977) Waiting for SSH to be available...
	I1001 20:03:10.650499   53352 main.go:141] libmachine: (test-preload-118977) DBG | Getting to WaitForSSH function...
	I1001 20:03:10.652620   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:10.652976   53352 main.go:141] libmachine: (test-preload-118977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:aa:30", ip: ""} in network mk-test-preload-118977: {Iface:virbr1 ExpiryTime:2024-10-01 21:03:02 +0000 UTC Type:0 Mac:52:54:00:c9:aa:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-118977 Clientid:01:52:54:00:c9:aa:30}
	I1001 20:03:10.653002   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined IP address 192.168.39.195 and MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:10.653116   53352 main.go:141] libmachine: (test-preload-118977) DBG | Using SSH client type: external
	I1001 20:03:10.653136   53352 main.go:141] libmachine: (test-preload-118977) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/test-preload-118977/id_rsa (-rw-------)
	I1001 20:03:10.653168   53352 main.go:141] libmachine: (test-preload-118977) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.195 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/test-preload-118977/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 20:03:10.653186   53352 main.go:141] libmachine: (test-preload-118977) DBG | About to run SSH command:
	I1001 20:03:10.653198   53352 main.go:141] libmachine: (test-preload-118977) DBG | exit 0
	I1001 20:03:10.776539   53352 main.go:141] libmachine: (test-preload-118977) DBG | SSH cmd err, output: <nil>: 
	I1001 20:03:10.776985   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetConfigRaw
	I1001 20:03:10.777584   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetIP
	I1001 20:03:10.780101   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:10.780542   53352 main.go:141] libmachine: (test-preload-118977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:aa:30", ip: ""} in network mk-test-preload-118977: {Iface:virbr1 ExpiryTime:2024-10-01 21:03:02 +0000 UTC Type:0 Mac:52:54:00:c9:aa:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-118977 Clientid:01:52:54:00:c9:aa:30}
	I1001 20:03:10.780573   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined IP address 192.168.39.195 and MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:10.780834   53352 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/test-preload-118977/config.json ...
	I1001 20:03:10.781021   53352 machine.go:93] provisionDockerMachine start ...
	I1001 20:03:10.781037   53352 main.go:141] libmachine: (test-preload-118977) Calling .DriverName
	I1001 20:03:10.781269   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHHostname
	I1001 20:03:10.783339   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:10.783688   53352 main.go:141] libmachine: (test-preload-118977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:aa:30", ip: ""} in network mk-test-preload-118977: {Iface:virbr1 ExpiryTime:2024-10-01 21:03:02 +0000 UTC Type:0 Mac:52:54:00:c9:aa:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-118977 Clientid:01:52:54:00:c9:aa:30}
	I1001 20:03:10.783721   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined IP address 192.168.39.195 and MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:10.783812   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHPort
	I1001 20:03:10.783979   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHKeyPath
	I1001 20:03:10.784114   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHKeyPath
	I1001 20:03:10.784234   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHUsername
	I1001 20:03:10.784388   53352 main.go:141] libmachine: Using SSH client type: native
	I1001 20:03:10.784620   53352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1001 20:03:10.784635   53352 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 20:03:10.884932   53352 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1001 20:03:10.884960   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetMachineName
	I1001 20:03:10.885201   53352 buildroot.go:166] provisioning hostname "test-preload-118977"
	I1001 20:03:10.885230   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetMachineName
	I1001 20:03:10.885426   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHHostname
	I1001 20:03:10.888007   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:10.888337   53352 main.go:141] libmachine: (test-preload-118977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:aa:30", ip: ""} in network mk-test-preload-118977: {Iface:virbr1 ExpiryTime:2024-10-01 21:03:02 +0000 UTC Type:0 Mac:52:54:00:c9:aa:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-118977 Clientid:01:52:54:00:c9:aa:30}
	I1001 20:03:10.888393   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined IP address 192.168.39.195 and MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:10.888559   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHPort
	I1001 20:03:10.888707   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHKeyPath
	I1001 20:03:10.888808   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHKeyPath
	I1001 20:03:10.888944   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHUsername
	I1001 20:03:10.889066   53352 main.go:141] libmachine: Using SSH client type: native
	I1001 20:03:10.889283   53352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1001 20:03:10.889302   53352 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-118977 && echo "test-preload-118977" | sudo tee /etc/hostname
	I1001 20:03:11.003629   53352 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-118977
	
	I1001 20:03:11.003659   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHHostname
	I1001 20:03:11.006181   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:11.006533   53352 main.go:141] libmachine: (test-preload-118977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:aa:30", ip: ""} in network mk-test-preload-118977: {Iface:virbr1 ExpiryTime:2024-10-01 21:03:02 +0000 UTC Type:0 Mac:52:54:00:c9:aa:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-118977 Clientid:01:52:54:00:c9:aa:30}
	I1001 20:03:11.006560   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined IP address 192.168.39.195 and MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:11.006705   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHPort
	I1001 20:03:11.006891   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHKeyPath
	I1001 20:03:11.007091   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHKeyPath
	I1001 20:03:11.007224   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHUsername
	I1001 20:03:11.007415   53352 main.go:141] libmachine: Using SSH client type: native
	I1001 20:03:11.007608   53352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1001 20:03:11.007630   53352 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-118977' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-118977/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-118977' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 20:03:11.118894   53352 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 20:03:11.118930   53352 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 20:03:11.118960   53352 buildroot.go:174] setting up certificates
	I1001 20:03:11.118971   53352 provision.go:84] configureAuth start
	I1001 20:03:11.118985   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetMachineName
	I1001 20:03:11.119273   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetIP
	I1001 20:03:11.122036   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:11.122372   53352 main.go:141] libmachine: (test-preload-118977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:aa:30", ip: ""} in network mk-test-preload-118977: {Iface:virbr1 ExpiryTime:2024-10-01 21:03:02 +0000 UTC Type:0 Mac:52:54:00:c9:aa:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-118977 Clientid:01:52:54:00:c9:aa:30}
	I1001 20:03:11.122400   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined IP address 192.168.39.195 and MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:11.122585   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHHostname
	I1001 20:03:11.124611   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:11.124889   53352 main.go:141] libmachine: (test-preload-118977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:aa:30", ip: ""} in network mk-test-preload-118977: {Iface:virbr1 ExpiryTime:2024-10-01 21:03:02 +0000 UTC Type:0 Mac:52:54:00:c9:aa:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-118977 Clientid:01:52:54:00:c9:aa:30}
	I1001 20:03:11.124907   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined IP address 192.168.39.195 and MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:11.125005   53352 provision.go:143] copyHostCerts
	I1001 20:03:11.125073   53352 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 20:03:11.125087   53352 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 20:03:11.125169   53352 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 20:03:11.125289   53352 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 20:03:11.125299   53352 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 20:03:11.125339   53352 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 20:03:11.125415   53352 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 20:03:11.125425   53352 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 20:03:11.125458   53352 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 20:03:11.125525   53352 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.test-preload-118977 san=[127.0.0.1 192.168.39.195 localhost minikube test-preload-118977]
	I1001 20:03:11.209016   53352 provision.go:177] copyRemoteCerts
	I1001 20:03:11.209078   53352 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 20:03:11.209115   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHHostname
	I1001 20:03:11.211845   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:11.212208   53352 main.go:141] libmachine: (test-preload-118977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:aa:30", ip: ""} in network mk-test-preload-118977: {Iface:virbr1 ExpiryTime:2024-10-01 21:03:02 +0000 UTC Type:0 Mac:52:54:00:c9:aa:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-118977 Clientid:01:52:54:00:c9:aa:30}
	I1001 20:03:11.212234   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined IP address 192.168.39.195 and MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:11.212437   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHPort
	I1001 20:03:11.212619   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHKeyPath
	I1001 20:03:11.212772   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHUsername
	I1001 20:03:11.212993   53352 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/test-preload-118977/id_rsa Username:docker}
	I1001 20:03:11.294734   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 20:03:11.321175   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1001 20:03:11.346852   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 20:03:11.371757   53352 provision.go:87] duration metric: took 252.771101ms to configureAuth
	I1001 20:03:11.371788   53352 buildroot.go:189] setting minikube options for container-runtime
	I1001 20:03:11.371950   53352 config.go:182] Loaded profile config "test-preload-118977": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1001 20:03:11.372018   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHHostname
	I1001 20:03:11.374416   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:11.374712   53352 main.go:141] libmachine: (test-preload-118977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:aa:30", ip: ""} in network mk-test-preload-118977: {Iface:virbr1 ExpiryTime:2024-10-01 21:03:02 +0000 UTC Type:0 Mac:52:54:00:c9:aa:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-118977 Clientid:01:52:54:00:c9:aa:30}
	I1001 20:03:11.374740   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined IP address 192.168.39.195 and MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:11.374897   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHPort
	I1001 20:03:11.375059   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHKeyPath
	I1001 20:03:11.375214   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHKeyPath
	I1001 20:03:11.375316   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHUsername
	I1001 20:03:11.375518   53352 main.go:141] libmachine: Using SSH client type: native
	I1001 20:03:11.375712   53352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1001 20:03:11.375736   53352 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 20:03:11.594518   53352 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 20:03:11.594546   53352 machine.go:96] duration metric: took 813.513468ms to provisionDockerMachine
	I1001 20:03:11.594559   53352 start.go:293] postStartSetup for "test-preload-118977" (driver="kvm2")
	I1001 20:03:11.594571   53352 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 20:03:11.594591   53352 main.go:141] libmachine: (test-preload-118977) Calling .DriverName
	I1001 20:03:11.594929   53352 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 20:03:11.594968   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHHostname
	I1001 20:03:11.598152   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:11.598589   53352 main.go:141] libmachine: (test-preload-118977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:aa:30", ip: ""} in network mk-test-preload-118977: {Iface:virbr1 ExpiryTime:2024-10-01 21:03:02 +0000 UTC Type:0 Mac:52:54:00:c9:aa:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-118977 Clientid:01:52:54:00:c9:aa:30}
	I1001 20:03:11.598620   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined IP address 192.168.39.195 and MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:11.598755   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHPort
	I1001 20:03:11.598959   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHKeyPath
	I1001 20:03:11.599129   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHUsername
	I1001 20:03:11.599275   53352 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/test-preload-118977/id_rsa Username:docker}
	I1001 20:03:11.678938   53352 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 20:03:11.683599   53352 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 20:03:11.683634   53352 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 20:03:11.683722   53352 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 20:03:11.683815   53352 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 20:03:11.683918   53352 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 20:03:11.693552   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 20:03:11.717413   53352 start.go:296] duration metric: took 122.837867ms for postStartSetup
	I1001 20:03:11.717461   53352 fix.go:56] duration metric: took 19.213748057s for fixHost
	I1001 20:03:11.717486   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHHostname
	I1001 20:03:11.720506   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:11.720943   53352 main.go:141] libmachine: (test-preload-118977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:aa:30", ip: ""} in network mk-test-preload-118977: {Iface:virbr1 ExpiryTime:2024-10-01 21:03:02 +0000 UTC Type:0 Mac:52:54:00:c9:aa:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-118977 Clientid:01:52:54:00:c9:aa:30}
	I1001 20:03:11.720990   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined IP address 192.168.39.195 and MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:11.721160   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHPort
	I1001 20:03:11.721368   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHKeyPath
	I1001 20:03:11.721512   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHKeyPath
	I1001 20:03:11.721602   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHUsername
	I1001 20:03:11.721719   53352 main.go:141] libmachine: Using SSH client type: native
	I1001 20:03:11.721874   53352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1001 20:03:11.721884   53352 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 20:03:11.820970   53352 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727812991.781280100
	
	I1001 20:03:11.820991   53352 fix.go:216] guest clock: 1727812991.781280100
	I1001 20:03:11.820999   53352 fix.go:229] Guest: 2024-10-01 20:03:11.7812801 +0000 UTC Remote: 2024-10-01 20:03:11.717466784 +0000 UTC m=+31.824463853 (delta=63.813316ms)
	I1001 20:03:11.821023   53352 fix.go:200] guest clock delta is within tolerance: 63.813316ms
	I1001 20:03:11.821027   53352 start.go:83] releasing machines lock for "test-preload-118977", held for 19.317326647s
	I1001 20:03:11.821043   53352 main.go:141] libmachine: (test-preload-118977) Calling .DriverName
	I1001 20:03:11.821294   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetIP
	I1001 20:03:11.824158   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:11.824588   53352 main.go:141] libmachine: (test-preload-118977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:aa:30", ip: ""} in network mk-test-preload-118977: {Iface:virbr1 ExpiryTime:2024-10-01 21:03:02 +0000 UTC Type:0 Mac:52:54:00:c9:aa:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-118977 Clientid:01:52:54:00:c9:aa:30}
	I1001 20:03:11.824619   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined IP address 192.168.39.195 and MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:11.824782   53352 main.go:141] libmachine: (test-preload-118977) Calling .DriverName
	I1001 20:03:11.825280   53352 main.go:141] libmachine: (test-preload-118977) Calling .DriverName
	I1001 20:03:11.825462   53352 main.go:141] libmachine: (test-preload-118977) Calling .DriverName
	I1001 20:03:11.825562   53352 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 20:03:11.825600   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHHostname
	I1001 20:03:11.825695   53352 ssh_runner.go:195] Run: cat /version.json
	I1001 20:03:11.825721   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHHostname
	I1001 20:03:11.828051   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:11.828221   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:11.828473   53352 main.go:141] libmachine: (test-preload-118977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:aa:30", ip: ""} in network mk-test-preload-118977: {Iface:virbr1 ExpiryTime:2024-10-01 21:03:02 +0000 UTC Type:0 Mac:52:54:00:c9:aa:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-118977 Clientid:01:52:54:00:c9:aa:30}
	I1001 20:03:11.828506   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined IP address 192.168.39.195 and MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:11.828624   53352 main.go:141] libmachine: (test-preload-118977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:aa:30", ip: ""} in network mk-test-preload-118977: {Iface:virbr1 ExpiryTime:2024-10-01 21:03:02 +0000 UTC Type:0 Mac:52:54:00:c9:aa:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-118977 Clientid:01:52:54:00:c9:aa:30}
	I1001 20:03:11.828647   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined IP address 192.168.39.195 and MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:11.828649   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHPort
	I1001 20:03:11.828816   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHPort
	I1001 20:03:11.828821   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHKeyPath
	I1001 20:03:11.829013   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHKeyPath
	I1001 20:03:11.829025   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHUsername
	I1001 20:03:11.829154   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHUsername
	I1001 20:03:11.829164   53352 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/test-preload-118977/id_rsa Username:docker}
	I1001 20:03:11.829245   53352 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/test-preload-118977/id_rsa Username:docker}
	I1001 20:03:11.944015   53352 ssh_runner.go:195] Run: systemctl --version
	I1001 20:03:11.949876   53352 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 20:03:12.087863   53352 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 20:03:12.094925   53352 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 20:03:12.094988   53352 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 20:03:12.111800   53352 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 20:03:12.111825   53352 start.go:495] detecting cgroup driver to use...
	I1001 20:03:12.111892   53352 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 20:03:12.129979   53352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 20:03:12.144315   53352 docker.go:217] disabling cri-docker service (if available) ...
	I1001 20:03:12.144392   53352 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 20:03:12.158358   53352 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 20:03:12.172974   53352 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 20:03:12.298713   53352 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 20:03:12.435712   53352 docker.go:233] disabling docker service ...
	I1001 20:03:12.435769   53352 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 20:03:12.450088   53352 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 20:03:12.463030   53352 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 20:03:12.591365   53352 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 20:03:12.702075   53352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 20:03:12.715742   53352 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 20:03:12.735674   53352 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1001 20:03:12.735748   53352 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:03:12.746035   53352 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 20:03:12.746100   53352 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:03:12.756432   53352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:03:12.767143   53352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:03:12.777432   53352 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 20:03:12.787825   53352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:03:12.797623   53352 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:03:12.814403   53352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
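	After these edits, /etc/crio/crio.conf.d/02-crio.conf should contain roughly the following fragment. This is a sketch reconstructed from the sed commands logged above, not a capture from the VM, and the section placement ([crio.image]/[crio.runtime]) is an assumption:
	
	  # reconstructed fragment (assumed layout); key/value pairs come from the sed edits above
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.7"
	
	  [crio.runtime]
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]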
	I1001 20:03:12.824440   53352 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 20:03:12.833942   53352 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 20:03:12.834012   53352 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 20:03:12.846035   53352 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 20:03:12.855432   53352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:03:12.963636   53352 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 20:03:13.050521   53352 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 20:03:13.050593   53352 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 20:03:13.055514   53352 start.go:563] Will wait 60s for crictl version
	I1001 20:03:13.055581   53352 ssh_runner.go:195] Run: which crictl
	I1001 20:03:13.059232   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 20:03:13.099132   53352 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 20:03:13.099243   53352 ssh_runner.go:195] Run: crio --version
	I1001 20:03:13.128038   53352 ssh_runner.go:195] Run: crio --version
	I1001 20:03:13.158360   53352 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I1001 20:03:13.159617   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetIP
	I1001 20:03:13.162127   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:13.162488   53352 main.go:141] libmachine: (test-preload-118977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:aa:30", ip: ""} in network mk-test-preload-118977: {Iface:virbr1 ExpiryTime:2024-10-01 21:03:02 +0000 UTC Type:0 Mac:52:54:00:c9:aa:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-118977 Clientid:01:52:54:00:c9:aa:30}
	I1001 20:03:13.162526   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined IP address 192.168.39.195 and MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:13.162699   53352 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 20:03:13.167070   53352 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 20:03:13.180951   53352 kubeadm.go:883] updating cluster {Name:test-preload-118977 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-118977 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 20:03:13.181073   53352 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1001 20:03:13.181122   53352 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:03:13.217585   53352 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1001 20:03:13.217641   53352 ssh_runner.go:195] Run: which lz4
	I1001 20:03:13.222243   53352 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 20:03:13.226602   53352 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 20:03:13.226639   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1001 20:03:14.704183   53352 crio.go:462] duration metric: took 1.48196753s to copy over tarball
	I1001 20:03:14.704251   53352 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 20:03:17.176861   53352 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.472586106s)
	I1001 20:03:17.176888   53352 crio.go:469] duration metric: took 2.472676044s to extract the tarball
	I1001 20:03:17.176897   53352 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1001 20:03:17.217451   53352 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:03:17.262695   53352 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1001 20:03:17.262722   53352 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1001 20:03:17.262791   53352 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 20:03:17.262814   53352 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1001 20:03:17.262829   53352 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1001 20:03:17.262858   53352 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1001 20:03:17.262872   53352 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1001 20:03:17.262883   53352 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1001 20:03:17.262911   53352 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1001 20:03:17.262866   53352 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1001 20:03:17.264283   53352 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1001 20:03:17.264343   53352 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1001 20:03:17.264382   53352 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1001 20:03:17.264347   53352 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1001 20:03:17.264426   53352 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1001 20:03:17.264344   53352 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 20:03:17.264390   53352 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1001 20:03:17.264351   53352 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1001 20:03:17.528733   53352 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1001 20:03:17.562004   53352 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1001 20:03:17.570051   53352 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1001 20:03:17.570091   53352 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1001 20:03:17.570126   53352 ssh_runner.go:195] Run: which crictl
	I1001 20:03:17.593524   53352 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1001 20:03:17.599221   53352 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1001 20:03:17.605146   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1001 20:03:17.605193   53352 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1001 20:03:17.605228   53352 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1001 20:03:17.605260   53352 ssh_runner.go:195] Run: which crictl
	I1001 20:03:17.634735   53352 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1001 20:03:17.663562   53352 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1001 20:03:17.672789   53352 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1001 20:03:17.672850   53352 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1001 20:03:17.672900   53352 ssh_runner.go:195] Run: which crictl
	I1001 20:03:17.675891   53352 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1001 20:03:17.692641   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1001 20:03:17.692681   53352 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1001 20:03:17.692725   53352 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1001 20:03:17.692764   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1001 20:03:17.692766   53352 ssh_runner.go:195] Run: which crictl
	I1001 20:03:17.742155   53352 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1001 20:03:17.742201   53352 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1001 20:03:17.742254   53352 ssh_runner.go:195] Run: which crictl
	I1001 20:03:17.772991   53352 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1001 20:03:17.773043   53352 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1001 20:03:17.773065   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1001 20:03:17.773089   53352 ssh_runner.go:195] Run: which crictl
	I1001 20:03:17.773101   53352 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1001 20:03:17.773126   53352 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1001 20:03:17.773166   53352 ssh_runner.go:195] Run: which crictl
	I1001 20:03:17.807849   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1001 20:03:17.807923   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1001 20:03:17.808882   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1001 20:03:17.864647   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1001 20:03:17.864732   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1001 20:03:17.864792   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1001 20:03:17.864879   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1001 20:03:17.864953   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1001 20:03:17.907172   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1001 20:03:17.907218   53352 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1001 20:03:17.907317   53352 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1001 20:03:17.956864   53352 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1001 20:03:17.956978   53352 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1001 20:03:17.996482   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1001 20:03:18.001144   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1001 20:03:18.001193   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1001 20:03:18.008681   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1001 20:03:18.046032   53352 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1001 20:03:18.046059   53352 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1001 20:03:18.046091   53352 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1001 20:03:18.046110   53352 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1001 20:03:18.046225   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1001 20:03:18.115909   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1001 20:03:18.115952   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1001 20:03:18.115962   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1001 20:03:18.140697   53352 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1001 20:03:18.140836   53352 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1001 20:03:18.577171   53352 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 20:03:20.822317   53352 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (2.776184219s)
	I1001 20:03:20.822349   53352 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1001 20:03:20.822374   53352 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4: (2.776132472s)
	I1001 20:03:20.822381   53352 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I1001 20:03:20.822403   53352 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1001 20:03:20.822427   53352 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1001 20:03:20.822490   53352 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1001 20:03:20.822534   53352 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0: (2.706557471s)
	I1001 20:03:20.822574   53352 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1001 20:03:20.822574   53352 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6: (2.706596536s)
	I1001 20:03:20.822631   53352 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1001 20:03:20.822663   53352 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4: (2.681809879s)
	I1001 20:03:20.822713   53352 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1001 20:03:20.822728   53352 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.245529506s)
	I1001 20:03:20.822670   53352 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1001 20:03:20.822742   53352 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1001 20:03:20.822633   53352 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4: (2.706698488s)
	I1001 20:03:20.822836   53352 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1001 20:03:20.822899   53352 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1001 20:03:20.976042   53352 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1001 20:03:20.976098   53352 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1001 20:03:20.976102   53352 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1001 20:03:20.976142   53352 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1001 20:03:20.976160   53352 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1001 20:03:20.976184   53352 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1001 20:03:20.976229   53352 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1001 20:03:21.421851   53352 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1001 20:03:21.421903   53352 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1001 20:03:21.421944   53352 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1001 20:03:22.168405   53352 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1001 20:03:22.168453   53352 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1001 20:03:22.168526   53352 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1001 20:03:22.614845   53352 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1001 20:03:22.614906   53352 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1001 20:03:22.614946   53352 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1001 20:03:24.563317   53352 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (1.948341209s)
	I1001 20:03:24.563347   53352 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1001 20:03:24.563370   53352 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1001 20:03:24.563405   53352 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1001 20:03:25.313835   53352 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1001 20:03:25.313874   53352 cache_images.go:123] Successfully loaded all cached images
	I1001 20:03:25.313879   53352 cache_images.go:92] duration metric: took 8.051146227s to LoadCachedImages
	I1001 20:03:25.313889   53352 kubeadm.go:934] updating node { 192.168.39.195 8443 v1.24.4 crio true true} ...
	I1001 20:03:25.313995   53352 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-118977 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-118977 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 20:03:25.314083   53352 ssh_runner.go:195] Run: crio config
	I1001 20:03:25.365144   53352 cni.go:84] Creating CNI manager for ""
	I1001 20:03:25.365172   53352 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:03:25.365190   53352 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 20:03:25.365212   53352 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.195 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-118977 NodeName:test-preload-118977 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 20:03:25.365345   53352 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-118977"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.195
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.195"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 20:03:25.365415   53352 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1001 20:03:25.375482   53352 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 20:03:25.375570   53352 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 20:03:25.385822   53352 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1001 20:03:25.403282   53352 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 20:03:25.419951   53352 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I1001 20:03:25.436848   53352 ssh_runner.go:195] Run: grep 192.168.39.195	control-plane.minikube.internal$ /etc/hosts
	I1001 20:03:25.440513   53352 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.195	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 20:03:25.452288   53352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:03:25.564886   53352 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:03:25.581556   53352 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/test-preload-118977 for IP: 192.168.39.195
	I1001 20:03:25.581577   53352 certs.go:194] generating shared ca certs ...
	I1001 20:03:25.581593   53352 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:03:25.581747   53352 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 20:03:25.581798   53352 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 20:03:25.581810   53352 certs.go:256] generating profile certs ...
	I1001 20:03:25.581899   53352 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/test-preload-118977/client.key
	I1001 20:03:25.581957   53352 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/test-preload-118977/apiserver.key.1a470d9a
	I1001 20:03:25.581992   53352 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/test-preload-118977/proxy-client.key
	I1001 20:03:25.582093   53352 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem (1338 bytes)
	W1001 20:03:25.582126   53352 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430_empty.pem, impossibly tiny 0 bytes
	I1001 20:03:25.582136   53352 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 20:03:25.582158   53352 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 20:03:25.582183   53352 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 20:03:25.582203   53352 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 20:03:25.582253   53352 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem (1708 bytes)
	I1001 20:03:25.582879   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 20:03:25.611186   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 20:03:25.654829   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 20:03:25.688815   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 20:03:25.717528   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/test-preload-118977/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1001 20:03:25.754669   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/test-preload-118977/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 20:03:25.788893   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/test-preload-118977/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 20:03:25.820249   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/test-preload-118977/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 20:03:25.844144   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem --> /usr/share/ca-certificates/18430.pem (1338 bytes)
	I1001 20:03:25.868035   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /usr/share/ca-certificates/184302.pem (1708 bytes)
	I1001 20:03:25.892331   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 20:03:25.915763   53352 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 20:03:25.932323   53352 ssh_runner.go:195] Run: openssl version
	I1001 20:03:25.937988   53352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 20:03:25.948477   53352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:03:25.952852   53352 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:03:25.952908   53352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:03:25.958941   53352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 20:03:25.970551   53352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18430.pem && ln -fs /usr/share/ca-certificates/18430.pem /etc/ssl/certs/18430.pem"
	I1001 20:03:25.980925   53352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18430.pem
	I1001 20:03:25.985607   53352 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 20:03:25.985682   53352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18430.pem
	I1001 20:03:25.991519   53352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18430.pem /etc/ssl/certs/51391683.0"
	I1001 20:03:26.002089   53352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/184302.pem && ln -fs /usr/share/ca-certificates/184302.pem /etc/ssl/certs/184302.pem"
	I1001 20:03:26.012697   53352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/184302.pem
	I1001 20:03:26.017167   53352 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 20:03:26.017248   53352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/184302.pem
	I1001 20:03:26.022809   53352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/184302.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 20:03:26.033673   53352 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 20:03:26.038321   53352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1001 20:03:26.044347   53352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1001 20:03:26.050309   53352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1001 20:03:26.056964   53352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1001 20:03:26.063030   53352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1001 20:03:26.069306   53352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1001 20:03:26.075750   53352 kubeadm.go:392] StartCluster: {Name:test-preload-118977 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-118977 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:03:26.075823   53352 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 20:03:26.075869   53352 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 20:03:26.113678   53352 cri.go:89] found id: ""
	I1001 20:03:26.113754   53352 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 20:03:26.123858   53352 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1001 20:03:26.123879   53352 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1001 20:03:26.123934   53352 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1001 20:03:26.133784   53352 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1001 20:03:26.134189   53352 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-118977" does not appear in /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:03:26.134326   53352 kubeconfig.go:62] /home/jenkins/minikube-integration/19736-11198/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-118977" cluster setting kubeconfig missing "test-preload-118977" context setting]
	I1001 20:03:26.134586   53352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/kubeconfig: {Name:mk7ef19d546928001abf2478a3abe1d17765a591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:03:26.135158   53352 kapi.go:59] client config for test-preload-118977: &rest.Config{Host:"https://192.168.39.195:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/test-preload-118977/client.crt", KeyFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/test-preload-118977/client.key", CAFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 20:03:26.135787   53352 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1001 20:03:26.145170   53352 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.195
	I1001 20:03:26.145202   53352 kubeadm.go:1160] stopping kube-system containers ...
	I1001 20:03:26.145213   53352 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1001 20:03:26.145263   53352 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 20:03:26.178496   53352 cri.go:89] found id: ""
	I1001 20:03:26.178578   53352 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1001 20:03:26.198825   53352 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:03:26.209146   53352 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:03:26.209172   53352 kubeadm.go:157] found existing configuration files:
	
	I1001 20:03:26.209217   53352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 20:03:26.218406   53352 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:03:26.218482   53352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:03:26.229135   53352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 20:03:26.238250   53352 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:03:26.238309   53352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:03:26.247752   53352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 20:03:26.256705   53352 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:03:26.256762   53352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:03:26.266156   53352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 20:03:26.275143   53352 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:03:26.275203   53352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 20:03:26.284628   53352 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 20:03:26.294585   53352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:03:26.392614   53352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:03:27.026741   53352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:03:27.292156   53352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:03:27.361924   53352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:03:27.446378   53352 api_server.go:52] waiting for apiserver process to appear ...
	I1001 20:03:27.446443   53352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:03:27.947486   53352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:03:28.446860   53352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:03:28.470640   53352 api_server.go:72] duration metric: took 1.024259096s to wait for apiserver process to appear ...
	I1001 20:03:28.470668   53352 api_server.go:88] waiting for apiserver healthz status ...
	I1001 20:03:28.470699   53352 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I1001 20:03:28.471219   53352 api_server.go:269] stopped: https://192.168.39.195:8443/healthz: Get "https://192.168.39.195:8443/healthz": dial tcp 192.168.39.195:8443: connect: connection refused
	I1001 20:03:28.970945   53352 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I1001 20:03:28.971460   53352 api_server.go:269] stopped: https://192.168.39.195:8443/healthz: Get "https://192.168.39.195:8443/healthz": dial tcp 192.168.39.195:8443: connect: connection refused
	I1001 20:03:29.471038   53352 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I1001 20:03:32.921526   53352 api_server.go:279] https://192.168.39.195:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1001 20:03:32.921567   53352 api_server.go:103] status: https://192.168.39.195:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1001 20:03:32.921595   53352 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I1001 20:03:32.937132   53352 api_server.go:279] https://192.168.39.195:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1001 20:03:32.937176   53352 api_server.go:103] status: https://192.168.39.195:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1001 20:03:32.971386   53352 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I1001 20:03:32.995261   53352 api_server.go:279] https://192.168.39.195:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 20:03:32.995313   53352 api_server.go:103] status: https://192.168.39.195:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 20:03:33.470837   53352 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I1001 20:03:33.478427   53352 api_server.go:279] https://192.168.39.195:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 20:03:33.478482   53352 api_server.go:103] status: https://192.168.39.195:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 20:03:33.971042   53352 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I1001 20:03:33.979025   53352 api_server.go:279] https://192.168.39.195:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 20:03:33.979064   53352 api_server.go:103] status: https://192.168.39.195:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 20:03:34.471638   53352 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I1001 20:03:34.482264   53352 api_server.go:279] https://192.168.39.195:8443/healthz returned 200:
	ok
	I1001 20:03:34.494235   53352 api_server.go:141] control plane version: v1.24.4
	I1001 20:03:34.494265   53352 api_server.go:131] duration metric: took 6.023590159s to wait for apiserver health ...
	I1001 20:03:34.494277   53352 cni.go:84] Creating CNI manager for ""
	I1001 20:03:34.494286   53352 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:03:34.495942   53352 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 20:03:34.497192   53352 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 20:03:34.523680   53352 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1001 20:03:34.559157   53352 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 20:03:34.559248   53352 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1001 20:03:34.559267   53352 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1001 20:03:34.572696   53352 system_pods.go:59] 7 kube-system pods found
	I1001 20:03:34.572731   53352 system_pods.go:61] "coredns-6d4b75cb6d-9dn6x" [c948a6ba-2bd3-45d0-8e69-abc0519b0167] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 20:03:34.572741   53352 system_pods.go:61] "etcd-test-preload-118977" [6e2846ba-34d1-4766-9ebf-fb4024767f59] Running
	I1001 20:03:34.572751   53352 system_pods.go:61] "kube-apiserver-test-preload-118977" [362eb6b2-4490-43ba-8925-dd7ae9d876b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1001 20:03:34.572756   53352 system_pods.go:61] "kube-controller-manager-test-preload-118977" [da48f5f6-5a0b-4d5d-bb3e-44e4b5d9a7e9] Running
	I1001 20:03:34.572765   53352 system_pods.go:61] "kube-proxy-z8dw5" [c2f552b1-85cd-4370-8849-55b4e2c44435] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1001 20:03:34.572770   53352 system_pods.go:61] "kube-scheduler-test-preload-118977" [f9b196d3-e877-491d-a006-38caa44725f5] Running
	I1001 20:03:34.572778   53352 system_pods.go:61] "storage-provisioner" [7e4a13fd-2975-48cc-a5d8-3716a427c939] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1001 20:03:34.572790   53352 system_pods.go:74] duration metric: took 13.610547ms to wait for pod list to return data ...
	I1001 20:03:34.572806   53352 node_conditions.go:102] verifying NodePressure condition ...
	I1001 20:03:34.576569   53352 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 20:03:34.576596   53352 node_conditions.go:123] node cpu capacity is 2
	I1001 20:03:34.576613   53352 node_conditions.go:105] duration metric: took 3.794681ms to run NodePressure ...
	I1001 20:03:34.576634   53352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:03:34.830055   53352 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1001 20:03:34.841821   53352 kubeadm.go:739] kubelet initialised
	I1001 20:03:34.841845   53352 kubeadm.go:740] duration metric: took 11.763907ms waiting for restarted kubelet to initialise ...
	I1001 20:03:34.841854   53352 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:03:34.848283   53352 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-9dn6x" in "kube-system" namespace to be "Ready" ...
	I1001 20:03:34.854688   53352 pod_ready.go:98] node "test-preload-118977" hosting pod "coredns-6d4b75cb6d-9dn6x" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-118977" has status "Ready":"False"
	I1001 20:03:34.854713   53352 pod_ready.go:82] duration metric: took 6.396742ms for pod "coredns-6d4b75cb6d-9dn6x" in "kube-system" namespace to be "Ready" ...
	E1001 20:03:34.854722   53352 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-118977" hosting pod "coredns-6d4b75cb6d-9dn6x" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-118977" has status "Ready":"False"
	I1001 20:03:34.854727   53352 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-118977" in "kube-system" namespace to be "Ready" ...
	I1001 20:03:34.860281   53352 pod_ready.go:98] node "test-preload-118977" hosting pod "etcd-test-preload-118977" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-118977" has status "Ready":"False"
	I1001 20:03:34.860306   53352 pod_ready.go:82] duration metric: took 5.570362ms for pod "etcd-test-preload-118977" in "kube-system" namespace to be "Ready" ...
	E1001 20:03:34.860314   53352 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-118977" hosting pod "etcd-test-preload-118977" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-118977" has status "Ready":"False"
	I1001 20:03:34.860323   53352 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-118977" in "kube-system" namespace to be "Ready" ...
	I1001 20:03:34.865882   53352 pod_ready.go:98] node "test-preload-118977" hosting pod "kube-apiserver-test-preload-118977" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-118977" has status "Ready":"False"
	I1001 20:03:34.865905   53352 pod_ready.go:82] duration metric: took 5.572126ms for pod "kube-apiserver-test-preload-118977" in "kube-system" namespace to be "Ready" ...
	E1001 20:03:34.865913   53352 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-118977" hosting pod "kube-apiserver-test-preload-118977" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-118977" has status "Ready":"False"
	I1001 20:03:34.865918   53352 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-118977" in "kube-system" namespace to be "Ready" ...
	I1001 20:03:34.963028   53352 pod_ready.go:98] node "test-preload-118977" hosting pod "kube-controller-manager-test-preload-118977" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-118977" has status "Ready":"False"
	I1001 20:03:34.963065   53352 pod_ready.go:82] duration metric: took 97.136822ms for pod "kube-controller-manager-test-preload-118977" in "kube-system" namespace to be "Ready" ...
	E1001 20:03:34.963079   53352 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-118977" hosting pod "kube-controller-manager-test-preload-118977" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-118977" has status "Ready":"False"
	I1001 20:03:34.963088   53352 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-z8dw5" in "kube-system" namespace to be "Ready" ...
	I1001 20:03:35.362965   53352 pod_ready.go:98] node "test-preload-118977" hosting pod "kube-proxy-z8dw5" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-118977" has status "Ready":"False"
	I1001 20:03:35.362997   53352 pod_ready.go:82] duration metric: took 399.897981ms for pod "kube-proxy-z8dw5" in "kube-system" namespace to be "Ready" ...
	E1001 20:03:35.363007   53352 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-118977" hosting pod "kube-proxy-z8dw5" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-118977" has status "Ready":"False"
	I1001 20:03:35.363012   53352 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-118977" in "kube-system" namespace to be "Ready" ...
	I1001 20:03:35.763029   53352 pod_ready.go:98] node "test-preload-118977" hosting pod "kube-scheduler-test-preload-118977" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-118977" has status "Ready":"False"
	I1001 20:03:35.763057   53352 pod_ready.go:82] duration metric: took 400.038361ms for pod "kube-scheduler-test-preload-118977" in "kube-system" namespace to be "Ready" ...
	E1001 20:03:35.763066   53352 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-118977" hosting pod "kube-scheduler-test-preload-118977" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-118977" has status "Ready":"False"
	I1001 20:03:35.763072   53352 pod_ready.go:39] duration metric: took 921.209804ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:03:35.763099   53352 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 20:03:35.774537   53352 ops.go:34] apiserver oom_adj: -16
	I1001 20:03:35.774572   53352 kubeadm.go:597] duration metric: took 9.650685546s to restartPrimaryControlPlane
	I1001 20:03:35.774583   53352 kubeadm.go:394] duration metric: took 9.698839581s to StartCluster
	I1001 20:03:35.774604   53352 settings.go:142] acquiring lock: {Name:mkeb3fe64ed992373f048aef24eb0d675bcab60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:03:35.774685   53352 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:03:35.775668   53352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/kubeconfig: {Name:mk7ef19d546928001abf2478a3abe1d17765a591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:03:35.775953   53352 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 20:03:35.776023   53352 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 20:03:35.776112   53352 addons.go:69] Setting storage-provisioner=true in profile "test-preload-118977"
	I1001 20:03:35.776133   53352 addons.go:69] Setting default-storageclass=true in profile "test-preload-118977"
	I1001 20:03:35.776151   53352 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-118977"
	I1001 20:03:35.776154   53352 config.go:182] Loaded profile config "test-preload-118977": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1001 20:03:35.776151   53352 addons.go:234] Setting addon storage-provisioner=true in "test-preload-118977"
	W1001 20:03:35.776242   53352 addons.go:243] addon storage-provisioner should already be in state true
	I1001 20:03:35.776271   53352 host.go:66] Checking if "test-preload-118977" exists ...
	I1001 20:03:35.776525   53352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:03:35.776564   53352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:03:35.776726   53352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:03:35.776767   53352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:03:35.777396   53352 out.go:177] * Verifying Kubernetes components...
	I1001 20:03:35.778520   53352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:03:35.791672   53352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42743
	I1001 20:03:35.792160   53352 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:03:35.792673   53352 main.go:141] libmachine: Using API Version  1
	I1001 20:03:35.792699   53352 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:03:35.793030   53352 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:03:35.793251   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetState
	I1001 20:03:35.795701   53352 kapi.go:59] client config for test-preload-118977: &rest.Config{Host:"https://192.168.39.195:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/test-preload-118977/client.crt", KeyFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/test-preload-118977/client.key", CAFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 20:03:35.795985   53352 addons.go:234] Setting addon default-storageclass=true in "test-preload-118977"
	W1001 20:03:35.796002   53352 addons.go:243] addon default-storageclass should already be in state true
	I1001 20:03:35.796027   53352 host.go:66] Checking if "test-preload-118977" exists ...
	I1001 20:03:35.796343   53352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:03:35.796342   53352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41107
	I1001 20:03:35.796407   53352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:03:35.796900   53352 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:03:35.797396   53352 main.go:141] libmachine: Using API Version  1
	I1001 20:03:35.797413   53352 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:03:35.797745   53352 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:03:35.798481   53352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:03:35.798545   53352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:03:35.813571   53352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35645
	I1001 20:03:35.814153   53352 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:03:35.814551   53352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38113
	I1001 20:03:35.814680   53352 main.go:141] libmachine: Using API Version  1
	I1001 20:03:35.814718   53352 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:03:35.815078   53352 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:03:35.815277   53352 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:03:35.815481   53352 main.go:141] libmachine: Using API Version  1
	I1001 20:03:35.815505   53352 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:03:35.815716   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetState
	I1001 20:03:35.815812   53352 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:03:35.816406   53352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:03:35.816446   53352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:03:35.817494   53352 main.go:141] libmachine: (test-preload-118977) Calling .DriverName
	I1001 20:03:35.819080   53352 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 20:03:35.820143   53352 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:03:35.820159   53352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 20:03:35.820177   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHHostname
	I1001 20:03:35.823321   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:35.823793   53352 main.go:141] libmachine: (test-preload-118977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:aa:30", ip: ""} in network mk-test-preload-118977: {Iface:virbr1 ExpiryTime:2024-10-01 21:03:02 +0000 UTC Type:0 Mac:52:54:00:c9:aa:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-118977 Clientid:01:52:54:00:c9:aa:30}
	I1001 20:03:35.823817   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined IP address 192.168.39.195 and MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:35.824022   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHPort
	I1001 20:03:35.824212   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHKeyPath
	I1001 20:03:35.824423   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHUsername
	I1001 20:03:35.824574   53352 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/test-preload-118977/id_rsa Username:docker}
	I1001 20:03:35.851963   53352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34751
	I1001 20:03:35.852546   53352 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:03:35.853024   53352 main.go:141] libmachine: Using API Version  1
	I1001 20:03:35.853049   53352 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:03:35.853432   53352 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:03:35.853656   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetState
	I1001 20:03:35.855529   53352 main.go:141] libmachine: (test-preload-118977) Calling .DriverName
	I1001 20:03:35.855838   53352 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 20:03:35.855866   53352 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 20:03:35.855904   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHHostname
	I1001 20:03:35.859280   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:35.859692   53352 main.go:141] libmachine: (test-preload-118977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:aa:30", ip: ""} in network mk-test-preload-118977: {Iface:virbr1 ExpiryTime:2024-10-01 21:03:02 +0000 UTC Type:0 Mac:52:54:00:c9:aa:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-118977 Clientid:01:52:54:00:c9:aa:30}
	I1001 20:03:35.859736   53352 main.go:141] libmachine: (test-preload-118977) DBG | domain test-preload-118977 has defined IP address 192.168.39.195 and MAC address 52:54:00:c9:aa:30 in network mk-test-preload-118977
	I1001 20:03:35.859874   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHPort
	I1001 20:03:35.860037   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHKeyPath
	I1001 20:03:35.860163   53352 main.go:141] libmachine: (test-preload-118977) Calling .GetSSHUsername
	I1001 20:03:35.860303   53352 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/test-preload-118977/id_rsa Username:docker}
	I1001 20:03:35.967011   53352 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:03:35.982745   53352 node_ready.go:35] waiting up to 6m0s for node "test-preload-118977" to be "Ready" ...
	I1001 20:03:36.110163   53352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:03:36.132735   53352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 20:03:37.098567   53352 main.go:141] libmachine: Making call to close driver server
	I1001 20:03:37.098596   53352 main.go:141] libmachine: (test-preload-118977) Calling .Close
	I1001 20:03:37.098710   53352 main.go:141] libmachine: Making call to close driver server
	I1001 20:03:37.098739   53352 main.go:141] libmachine: (test-preload-118977) Calling .Close
	I1001 20:03:37.098920   53352 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:03:37.098939   53352 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:03:37.098948   53352 main.go:141] libmachine: Making call to close driver server
	I1001 20:03:37.098955   53352 main.go:141] libmachine: (test-preload-118977) Calling .Close
	I1001 20:03:37.099067   53352 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:03:37.099084   53352 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:03:37.099094   53352 main.go:141] libmachine: Making call to close driver server
	I1001 20:03:37.099103   53352 main.go:141] libmachine: (test-preload-118977) Calling .Close
	I1001 20:03:37.099153   53352 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:03:37.099164   53352 main.go:141] libmachine: (test-preload-118977) DBG | Closing plugin on server side
	I1001 20:03:37.099168   53352 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:03:37.099363   53352 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:03:37.099379   53352 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:03:37.106270   53352 main.go:141] libmachine: Making call to close driver server
	I1001 20:03:37.106288   53352 main.go:141] libmachine: (test-preload-118977) Calling .Close
	I1001 20:03:37.106600   53352 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:03:37.106617   53352 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:03:37.108314   53352 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1001 20:03:37.109293   53352 addons.go:510] duration metric: took 1.333279433s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1001 20:03:37.986345   53352 node_ready.go:53] node "test-preload-118977" has status "Ready":"False"
	I1001 20:03:40.486903   53352 node_ready.go:53] node "test-preload-118977" has status "Ready":"False"
	I1001 20:03:42.987633   53352 node_ready.go:53] node "test-preload-118977" has status "Ready":"False"
	I1001 20:03:43.486027   53352 node_ready.go:49] node "test-preload-118977" has status "Ready":"True"
	I1001 20:03:43.486052   53352 node_ready.go:38] duration metric: took 7.503274752s for node "test-preload-118977" to be "Ready" ...
	I1001 20:03:43.486060   53352 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:03:43.491519   53352 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-9dn6x" in "kube-system" namespace to be "Ready" ...
	I1001 20:03:43.499928   53352 pod_ready.go:93] pod "coredns-6d4b75cb6d-9dn6x" in "kube-system" namespace has status "Ready":"True"
	I1001 20:03:43.499956   53352 pod_ready.go:82] duration metric: took 8.406872ms for pod "coredns-6d4b75cb6d-9dn6x" in "kube-system" namespace to be "Ready" ...
	I1001 20:03:43.499968   53352 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-118977" in "kube-system" namespace to be "Ready" ...
	I1001 20:03:43.508575   53352 pod_ready.go:93] pod "etcd-test-preload-118977" in "kube-system" namespace has status "Ready":"True"
	I1001 20:03:43.508602   53352 pod_ready.go:82] duration metric: took 8.625127ms for pod "etcd-test-preload-118977" in "kube-system" namespace to be "Ready" ...
	I1001 20:03:43.508614   53352 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-118977" in "kube-system" namespace to be "Ready" ...
	I1001 20:03:43.514334   53352 pod_ready.go:93] pod "kube-apiserver-test-preload-118977" in "kube-system" namespace has status "Ready":"True"
	I1001 20:03:43.514363   53352 pod_ready.go:82] duration metric: took 5.739876ms for pod "kube-apiserver-test-preload-118977" in "kube-system" namespace to be "Ready" ...
	I1001 20:03:43.514376   53352 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-118977" in "kube-system" namespace to be "Ready" ...
	I1001 20:03:44.023013   53352 pod_ready.go:93] pod "kube-controller-manager-test-preload-118977" in "kube-system" namespace has status "Ready":"True"
	I1001 20:03:44.023050   53352 pod_ready.go:82] duration metric: took 508.6653ms for pod "kube-controller-manager-test-preload-118977" in "kube-system" namespace to be "Ready" ...
	I1001 20:03:44.023066   53352 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z8dw5" in "kube-system" namespace to be "Ready" ...
	I1001 20:03:44.290308   53352 pod_ready.go:93] pod "kube-proxy-z8dw5" in "kube-system" namespace has status "Ready":"True"
	I1001 20:03:44.290341   53352 pod_ready.go:82] duration metric: took 267.264919ms for pod "kube-proxy-z8dw5" in "kube-system" namespace to be "Ready" ...
	I1001 20:03:44.290353   53352 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-118977" in "kube-system" namespace to be "Ready" ...
	I1001 20:03:44.686204   53352 pod_ready.go:93] pod "kube-scheduler-test-preload-118977" in "kube-system" namespace has status "Ready":"True"
	I1001 20:03:44.686236   53352 pod_ready.go:82] duration metric: took 395.869998ms for pod "kube-scheduler-test-preload-118977" in "kube-system" namespace to be "Ready" ...
	I1001 20:03:44.686247   53352 pod_ready.go:39] duration metric: took 1.200177229s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:03:44.686258   53352 api_server.go:52] waiting for apiserver process to appear ...
	I1001 20:03:44.686308   53352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:03:44.701028   53352 api_server.go:72] duration metric: took 8.92503833s to wait for apiserver process to appear ...
	I1001 20:03:44.701062   53352 api_server.go:88] waiting for apiserver healthz status ...
	I1001 20:03:44.701084   53352 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I1001 20:03:44.706222   53352 api_server.go:279] https://192.168.39.195:8443/healthz returned 200:
	ok
	I1001 20:03:44.707238   53352 api_server.go:141] control plane version: v1.24.4
	I1001 20:03:44.707260   53352 api_server.go:131] duration metric: took 6.190835ms to wait for apiserver health ...
	I1001 20:03:44.707269   53352 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 20:03:44.889804   53352 system_pods.go:59] 7 kube-system pods found
	I1001 20:03:44.889842   53352 system_pods.go:61] "coredns-6d4b75cb6d-9dn6x" [c948a6ba-2bd3-45d0-8e69-abc0519b0167] Running
	I1001 20:03:44.889847   53352 system_pods.go:61] "etcd-test-preload-118977" [6e2846ba-34d1-4766-9ebf-fb4024767f59] Running
	I1001 20:03:44.889850   53352 system_pods.go:61] "kube-apiserver-test-preload-118977" [362eb6b2-4490-43ba-8925-dd7ae9d876b0] Running
	I1001 20:03:44.889854   53352 system_pods.go:61] "kube-controller-manager-test-preload-118977" [da48f5f6-5a0b-4d5d-bb3e-44e4b5d9a7e9] Running
	I1001 20:03:44.889858   53352 system_pods.go:61] "kube-proxy-z8dw5" [c2f552b1-85cd-4370-8849-55b4e2c44435] Running
	I1001 20:03:44.889861   53352 system_pods.go:61] "kube-scheduler-test-preload-118977" [f9b196d3-e877-491d-a006-38caa44725f5] Running
	I1001 20:03:44.889863   53352 system_pods.go:61] "storage-provisioner" [7e4a13fd-2975-48cc-a5d8-3716a427c939] Running
	I1001 20:03:44.889870   53352 system_pods.go:74] duration metric: took 182.595012ms to wait for pod list to return data ...
	I1001 20:03:44.889876   53352 default_sa.go:34] waiting for default service account to be created ...
	I1001 20:03:45.088652   53352 default_sa.go:45] found service account: "default"
	I1001 20:03:45.088691   53352 default_sa.go:55] duration metric: took 198.808671ms for default service account to be created ...
	I1001 20:03:45.088702   53352 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 20:03:45.288769   53352 system_pods.go:86] 7 kube-system pods found
	I1001 20:03:45.288810   53352 system_pods.go:89] "coredns-6d4b75cb6d-9dn6x" [c948a6ba-2bd3-45d0-8e69-abc0519b0167] Running
	I1001 20:03:45.288816   53352 system_pods.go:89] "etcd-test-preload-118977" [6e2846ba-34d1-4766-9ebf-fb4024767f59] Running
	I1001 20:03:45.288824   53352 system_pods.go:89] "kube-apiserver-test-preload-118977" [362eb6b2-4490-43ba-8925-dd7ae9d876b0] Running
	I1001 20:03:45.288828   53352 system_pods.go:89] "kube-controller-manager-test-preload-118977" [da48f5f6-5a0b-4d5d-bb3e-44e4b5d9a7e9] Running
	I1001 20:03:45.288832   53352 system_pods.go:89] "kube-proxy-z8dw5" [c2f552b1-85cd-4370-8849-55b4e2c44435] Running
	I1001 20:03:45.288835   53352 system_pods.go:89] "kube-scheduler-test-preload-118977" [f9b196d3-e877-491d-a006-38caa44725f5] Running
	I1001 20:03:45.288838   53352 system_pods.go:89] "storage-provisioner" [7e4a13fd-2975-48cc-a5d8-3716a427c939] Running
	I1001 20:03:45.288845   53352 system_pods.go:126] duration metric: took 200.137696ms to wait for k8s-apps to be running ...
	I1001 20:03:45.288852   53352 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 20:03:45.288898   53352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:03:45.304877   53352 system_svc.go:56] duration metric: took 16.015171ms WaitForService to wait for kubelet
	I1001 20:03:45.304916   53352 kubeadm.go:582] duration metric: took 9.528927864s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:03:45.304937   53352 node_conditions.go:102] verifying NodePressure condition ...
	I1001 20:03:45.486448   53352 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 20:03:45.486473   53352 node_conditions.go:123] node cpu capacity is 2
	I1001 20:03:45.486482   53352 node_conditions.go:105] duration metric: took 181.539897ms to run NodePressure ...
	I1001 20:03:45.486492   53352 start.go:241] waiting for startup goroutines ...
	I1001 20:03:45.486498   53352 start.go:246] waiting for cluster config update ...
	I1001 20:03:45.486508   53352 start.go:255] writing updated cluster config ...
	I1001 20:03:45.486754   53352 ssh_runner.go:195] Run: rm -f paused
	I1001 20:03:45.534586   53352 start.go:600] kubectl: 1.31.1, cluster: 1.24.4 (minor skew: 7)
	I1001 20:03:45.536773   53352 out.go:201] 
	W1001 20:03:45.538194   53352 out.go:270] ! /usr/local/bin/kubectl is version 1.31.1, which may have incompatibilities with Kubernetes 1.24.4.
	I1001 20:03:45.539537   53352 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1001 20:03:45.541007   53352 out.go:177] * Done! kubectl is now configured to use "test-preload-118977" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 01 20:03:46 test-preload-118977 crio[666]: time="2024-10-01 20:03:46.415079712Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727813026415017597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee58bb9c-eaec-4e83-af40-968fc7eb100c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:03:46 test-preload-118977 crio[666]: time="2024-10-01 20:03:46.415573620Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec06e225-4820-489e-b2a3-2c0758cbcd10 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:03:46 test-preload-118977 crio[666]: time="2024-10-01 20:03:46.415625034Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec06e225-4820-489e-b2a3-2c0758cbcd10 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:03:46 test-preload-118977 crio[666]: time="2024-10-01 20:03:46.415777011Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d8b0b488433f6272051254e4450106a6d444c2504fddbcdd8bdb7e46b19ee83,PodSandboxId:b1875a4f238b9781309df2b39fa28fed8e3fbe5738cd30b6c93ca420c066f327,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727813021412260672,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9dn6x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c948a6ba-2bd3-45d0-8e69-abc0519b0167,},Annotations:map[string]string{io.kubernetes.container.hash: 84de22d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad99ac4ef5389f4ef4e307e3d806fd3a51b6ea6db06509c3fbe061f43c6ce586,PodSandboxId:22e7fa46d3f8a4ed6d63bf77981a3dab521383be24a880c2ed4ab1f6ab6bed11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727813014394253697,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z8dw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c2f552b1-85cd-4370-8849-55b4e2c44435,},Annotations:map[string]string{io.kubernetes.container.hash: 45f1e27e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5a25a496d0f03d1604f957183e978c0ab48f0f63deee9675893ec694a9dffe2,PodSandboxId:d5fe9610ed6e50a54a1954ca3155f6ebbb45b1495637cc7bb7f8694959ee0383,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727813014393204990,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e4
a13fd-2975-48cc-a5d8-3716a427c939,},Annotations:map[string]string{io.kubernetes.container.hash: 3e4fc7a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1aeb69b35e5edc7802e8234a19047527ac9205a44ada2e6f96c3d3ce534bdfe,PodSandboxId:fb94de3ceec244d49c31106d68280b99b7496aee366b6617eaa5252a0a13f1ee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727813008178095712,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-118977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d5becdba
e6c2d3d65e14694cb4dfaa,},Annotations:map[string]string{io.kubernetes.container.hash: cdddbf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85595c942d29bac45d52ec6bc25b8e319ce5cfbe39a57bf66b2e242fde91b01a,PodSandboxId:ee247ab9a6f252e3e10a738c021f89e86e718be2781307d2d6dbe8b88df705ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727813008162908507,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-118977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 45e8599da0aab65449f2ad25264058f7,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e0456b3927b1bbe74eede91d43ba32b5cd516b0f35e9634a2ca56dec8de70df,PodSandboxId:cb51c33d1b78f82ac70147c664d5002f1363988f2ce1f22cfffc4d8c1420197a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727813008146770256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-118977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60068486860743181098b2233ed780fb,},
Annotations:map[string]string{io.kubernetes.container.hash: 8be1269b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b81db4834b1a03b6f2bd8691a4da7825fca6ff7fd36631c6d23e2a639d6456e8,PodSandboxId:bf09e4f04cc51e39eacec3f852f2e14071e91ebebdc44fca51a0e41861ba89cb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727813008083790869,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-118977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9782142aa65f77b25e5c7d89bcbb8f70,},Annotations
:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ec06e225-4820-489e-b2a3-2c0758cbcd10 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:03:46 test-preload-118977 crio[666]: time="2024-10-01 20:03:46.458080469Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bcd5f266-b95a-4e79-abec-9d25b1d3c45c name=/runtime.v1.RuntimeService/Version
	Oct 01 20:03:46 test-preload-118977 crio[666]: time="2024-10-01 20:03:46.458153444Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bcd5f266-b95a-4e79-abec-9d25b1d3c45c name=/runtime.v1.RuntimeService/Version
	Oct 01 20:03:46 test-preload-118977 crio[666]: time="2024-10-01 20:03:46.459801238Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e0569e3d-588d-4b4b-8e6d-e40a31b2cb8f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:03:46 test-preload-118977 crio[666]: time="2024-10-01 20:03:46.460449632Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727813026460422337,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e0569e3d-588d-4b4b-8e6d-e40a31b2cb8f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:03:46 test-preload-118977 crio[666]: time="2024-10-01 20:03:46.461185303Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3eb0fd26-409e-4ee5-9553-b56b76a9574f name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:03:46 test-preload-118977 crio[666]: time="2024-10-01 20:03:46.461255669Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3eb0fd26-409e-4ee5-9553-b56b76a9574f name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:03:46 test-preload-118977 crio[666]: time="2024-10-01 20:03:46.461413671Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d8b0b488433f6272051254e4450106a6d444c2504fddbcdd8bdb7e46b19ee83,PodSandboxId:b1875a4f238b9781309df2b39fa28fed8e3fbe5738cd30b6c93ca420c066f327,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727813021412260672,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9dn6x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c948a6ba-2bd3-45d0-8e69-abc0519b0167,},Annotations:map[string]string{io.kubernetes.container.hash: 84de22d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad99ac4ef5389f4ef4e307e3d806fd3a51b6ea6db06509c3fbe061f43c6ce586,PodSandboxId:22e7fa46d3f8a4ed6d63bf77981a3dab521383be24a880c2ed4ab1f6ab6bed11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727813014394253697,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z8dw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c2f552b1-85cd-4370-8849-55b4e2c44435,},Annotations:map[string]string{io.kubernetes.container.hash: 45f1e27e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5a25a496d0f03d1604f957183e978c0ab48f0f63deee9675893ec694a9dffe2,PodSandboxId:d5fe9610ed6e50a54a1954ca3155f6ebbb45b1495637cc7bb7f8694959ee0383,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727813014393204990,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e4
a13fd-2975-48cc-a5d8-3716a427c939,},Annotations:map[string]string{io.kubernetes.container.hash: 3e4fc7a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1aeb69b35e5edc7802e8234a19047527ac9205a44ada2e6f96c3d3ce534bdfe,PodSandboxId:fb94de3ceec244d49c31106d68280b99b7496aee366b6617eaa5252a0a13f1ee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727813008178095712,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-118977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d5becdba
e6c2d3d65e14694cb4dfaa,},Annotations:map[string]string{io.kubernetes.container.hash: cdddbf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85595c942d29bac45d52ec6bc25b8e319ce5cfbe39a57bf66b2e242fde91b01a,PodSandboxId:ee247ab9a6f252e3e10a738c021f89e86e718be2781307d2d6dbe8b88df705ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727813008162908507,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-118977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 45e8599da0aab65449f2ad25264058f7,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e0456b3927b1bbe74eede91d43ba32b5cd516b0f35e9634a2ca56dec8de70df,PodSandboxId:cb51c33d1b78f82ac70147c664d5002f1363988f2ce1f22cfffc4d8c1420197a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727813008146770256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-118977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60068486860743181098b2233ed780fb,},
Annotations:map[string]string{io.kubernetes.container.hash: 8be1269b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b81db4834b1a03b6f2bd8691a4da7825fca6ff7fd36631c6d23e2a639d6456e8,PodSandboxId:bf09e4f04cc51e39eacec3f852f2e14071e91ebebdc44fca51a0e41861ba89cb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727813008083790869,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-118977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9782142aa65f77b25e5c7d89bcbb8f70,},Annotations
:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3eb0fd26-409e-4ee5-9553-b56b76a9574f name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:03:46 test-preload-118977 crio[666]: time="2024-10-01 20:03:46.499205022Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6f94af21-efa2-48ee-9842-b39e8a0ded7c name=/runtime.v1.RuntimeService/Version
	Oct 01 20:03:46 test-preload-118977 crio[666]: time="2024-10-01 20:03:46.499297654Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6f94af21-efa2-48ee-9842-b39e8a0ded7c name=/runtime.v1.RuntimeService/Version
	Oct 01 20:03:46 test-preload-118977 crio[666]: time="2024-10-01 20:03:46.500673227Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=914d9309-970a-4414-adf1-5c283bd69cf5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:03:46 test-preload-118977 crio[666]: time="2024-10-01 20:03:46.501161553Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727813026501139686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=914d9309-970a-4414-adf1-5c283bd69cf5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:03:46 test-preload-118977 crio[666]: time="2024-10-01 20:03:46.501852488Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=705a5cfc-33bc-4cb2-b3d9-3b3c1f7bc6c4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:03:46 test-preload-118977 crio[666]: time="2024-10-01 20:03:46.501923125Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=705a5cfc-33bc-4cb2-b3d9-3b3c1f7bc6c4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:03:46 test-preload-118977 crio[666]: time="2024-10-01 20:03:46.502146553Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d8b0b488433f6272051254e4450106a6d444c2504fddbcdd8bdb7e46b19ee83,PodSandboxId:b1875a4f238b9781309df2b39fa28fed8e3fbe5738cd30b6c93ca420c066f327,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727813021412260672,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9dn6x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c948a6ba-2bd3-45d0-8e69-abc0519b0167,},Annotations:map[string]string{io.kubernetes.container.hash: 84de22d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad99ac4ef5389f4ef4e307e3d806fd3a51b6ea6db06509c3fbe061f43c6ce586,PodSandboxId:22e7fa46d3f8a4ed6d63bf77981a3dab521383be24a880c2ed4ab1f6ab6bed11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727813014394253697,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z8dw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c2f552b1-85cd-4370-8849-55b4e2c44435,},Annotations:map[string]string{io.kubernetes.container.hash: 45f1e27e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5a25a496d0f03d1604f957183e978c0ab48f0f63deee9675893ec694a9dffe2,PodSandboxId:d5fe9610ed6e50a54a1954ca3155f6ebbb45b1495637cc7bb7f8694959ee0383,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727813014393204990,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e4
a13fd-2975-48cc-a5d8-3716a427c939,},Annotations:map[string]string{io.kubernetes.container.hash: 3e4fc7a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1aeb69b35e5edc7802e8234a19047527ac9205a44ada2e6f96c3d3ce534bdfe,PodSandboxId:fb94de3ceec244d49c31106d68280b99b7496aee366b6617eaa5252a0a13f1ee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727813008178095712,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-118977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d5becdba
e6c2d3d65e14694cb4dfaa,},Annotations:map[string]string{io.kubernetes.container.hash: cdddbf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85595c942d29bac45d52ec6bc25b8e319ce5cfbe39a57bf66b2e242fde91b01a,PodSandboxId:ee247ab9a6f252e3e10a738c021f89e86e718be2781307d2d6dbe8b88df705ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727813008162908507,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-118977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 45e8599da0aab65449f2ad25264058f7,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e0456b3927b1bbe74eede91d43ba32b5cd516b0f35e9634a2ca56dec8de70df,PodSandboxId:cb51c33d1b78f82ac70147c664d5002f1363988f2ce1f22cfffc4d8c1420197a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727813008146770256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-118977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60068486860743181098b2233ed780fb,},
Annotations:map[string]string{io.kubernetes.container.hash: 8be1269b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b81db4834b1a03b6f2bd8691a4da7825fca6ff7fd36631c6d23e2a639d6456e8,PodSandboxId:bf09e4f04cc51e39eacec3f852f2e14071e91ebebdc44fca51a0e41861ba89cb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727813008083790869,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-118977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9782142aa65f77b25e5c7d89bcbb8f70,},Annotations
:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=705a5cfc-33bc-4cb2-b3d9-3b3c1f7bc6c4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:03:46 test-preload-118977 crio[666]: time="2024-10-01 20:03:46.535590751Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e114ab28-45c6-4447-891a-b5c5b6f7c41e name=/runtime.v1.RuntimeService/Version
	Oct 01 20:03:46 test-preload-118977 crio[666]: time="2024-10-01 20:03:46.535674085Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e114ab28-45c6-4447-891a-b5c5b6f7c41e name=/runtime.v1.RuntimeService/Version
	Oct 01 20:03:46 test-preload-118977 crio[666]: time="2024-10-01 20:03:46.536671420Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=98a00b6b-486f-4dfa-a5aa-83749e185dba name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:03:46 test-preload-118977 crio[666]: time="2024-10-01 20:03:46.537255055Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727813026537227234,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98a00b6b-486f-4dfa-a5aa-83749e185dba name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:03:46 test-preload-118977 crio[666]: time="2024-10-01 20:03:46.538084642Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=31626708-1130-4e98-bf42-3d64b7b4feee name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:03:46 test-preload-118977 crio[666]: time="2024-10-01 20:03:46.538145968Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=31626708-1130-4e98-bf42-3d64b7b4feee name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:03:46 test-preload-118977 crio[666]: time="2024-10-01 20:03:46.538294448Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d8b0b488433f6272051254e4450106a6d444c2504fddbcdd8bdb7e46b19ee83,PodSandboxId:b1875a4f238b9781309df2b39fa28fed8e3fbe5738cd30b6c93ca420c066f327,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727813021412260672,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9dn6x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c948a6ba-2bd3-45d0-8e69-abc0519b0167,},Annotations:map[string]string{io.kubernetes.container.hash: 84de22d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad99ac4ef5389f4ef4e307e3d806fd3a51b6ea6db06509c3fbe061f43c6ce586,PodSandboxId:22e7fa46d3f8a4ed6d63bf77981a3dab521383be24a880c2ed4ab1f6ab6bed11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727813014394253697,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z8dw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c2f552b1-85cd-4370-8849-55b4e2c44435,},Annotations:map[string]string{io.kubernetes.container.hash: 45f1e27e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5a25a496d0f03d1604f957183e978c0ab48f0f63deee9675893ec694a9dffe2,PodSandboxId:d5fe9610ed6e50a54a1954ca3155f6ebbb45b1495637cc7bb7f8694959ee0383,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727813014393204990,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e4
a13fd-2975-48cc-a5d8-3716a427c939,},Annotations:map[string]string{io.kubernetes.container.hash: 3e4fc7a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1aeb69b35e5edc7802e8234a19047527ac9205a44ada2e6f96c3d3ce534bdfe,PodSandboxId:fb94de3ceec244d49c31106d68280b99b7496aee366b6617eaa5252a0a13f1ee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727813008178095712,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-118977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d5becdba
e6c2d3d65e14694cb4dfaa,},Annotations:map[string]string{io.kubernetes.container.hash: cdddbf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85595c942d29bac45d52ec6bc25b8e319ce5cfbe39a57bf66b2e242fde91b01a,PodSandboxId:ee247ab9a6f252e3e10a738c021f89e86e718be2781307d2d6dbe8b88df705ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727813008162908507,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-118977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 45e8599da0aab65449f2ad25264058f7,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e0456b3927b1bbe74eede91d43ba32b5cd516b0f35e9634a2ca56dec8de70df,PodSandboxId:cb51c33d1b78f82ac70147c664d5002f1363988f2ce1f22cfffc4d8c1420197a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727813008146770256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-118977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60068486860743181098b2233ed780fb,},
Annotations:map[string]string{io.kubernetes.container.hash: 8be1269b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b81db4834b1a03b6f2bd8691a4da7825fca6ff7fd36631c6d23e2a639d6456e8,PodSandboxId:bf09e4f04cc51e39eacec3f852f2e14071e91ebebdc44fca51a0e41861ba89cb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727813008083790869,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-118977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9782142aa65f77b25e5c7d89bcbb8f70,},Annotations
:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=31626708-1130-4e98-bf42-3d64b7b4feee name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7d8b0b488433f       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   5 seconds ago       Running             coredns                   1                   b1875a4f238b9       coredns-6d4b75cb6d-9dn6x
	ad99ac4ef5389       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   12 seconds ago      Running             kube-proxy                1                   22e7fa46d3f8a       kube-proxy-z8dw5
	b5a25a496d0f0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Running             storage-provisioner       1                   d5fe9610ed6e5       storage-provisioner
	d1aeb69b35e5e       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   18 seconds ago      Running             kube-apiserver            1                   fb94de3ceec24       kube-apiserver-test-preload-118977
	85595c942d29b       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   18 seconds ago      Running             kube-controller-manager   1                   ee247ab9a6f25       kube-controller-manager-test-preload-118977
	3e0456b3927b1       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   18 seconds ago      Running             etcd                      1                   cb51c33d1b78f       etcd-test-preload-118977
	b81db4834b1a0       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   18 seconds ago      Running             kube-scheduler            1                   bf09e4f04cc51       kube-scheduler-test-preload-118977
	
	
	==> coredns [7d8b0b488433f6272051254e4450106a6d444c2504fddbcdd8bdb7e46b19ee83] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:58583 - 25546 "HINFO IN 1318045418279650826.5736809773029872785. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019403454s
	
	
	==> describe nodes <==
	Name:               test-preload-118977
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-118977
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=test-preload-118977
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T20_02_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 20:01:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-118977
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 20:03:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 20:03:43 +0000   Tue, 01 Oct 2024 20:01:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 20:03:43 +0000   Tue, 01 Oct 2024 20:01:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 20:03:43 +0000   Tue, 01 Oct 2024 20:01:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 20:03:43 +0000   Tue, 01 Oct 2024 20:03:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    test-preload-118977
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 13b0520a58614965af51b01d9cafa1a3
	  System UUID:                13b0520a-5861-4965-af51-b01d9cafa1a3
	  Boot ID:                    040d7eb0-5b54-41a0-b26d-54a0787a3d26
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-9dn6x                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     92s
	  kube-system                 etcd-test-preload-118977                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         104s
	  kube-system                 kube-apiserver-test-preload-118977             250m (12%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-test-preload-118977    200m (10%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-proxy-z8dw5                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-scheduler-test-preload-118977             100m (5%)     0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11s                kube-proxy       
	  Normal  Starting                 90s                kube-proxy       
	  Normal  Starting                 105s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  104s               kubelet          Node test-preload-118977 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s               kubelet          Node test-preload-118977 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s               kubelet          Node test-preload-118977 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  104s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                94s                kubelet          Node test-preload-118977 status is now: NodeReady
	  Normal  RegisteredNode           92s                node-controller  Node test-preload-118977 event: Registered Node test-preload-118977 in Controller
	  Normal  Starting                 19s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19s (x8 over 19s)  kubelet          Node test-preload-118977 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x8 over 19s)  kubelet          Node test-preload-118977 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x7 over 19s)  kubelet          Node test-preload-118977 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                 node-controller  Node test-preload-118977 event: Registered Node test-preload-118977 in Controller
	
	
	==> dmesg <==
	[Oct 1 20:02] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050944] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036738] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.753975] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Oct 1 20:03] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.448932] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.146178] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.062014] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060032] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.161058] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.127686] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.258464] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[ +12.599099] systemd-fstab-generator[990]: Ignoring "noauto" option for root device
	[  +0.061086] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.656809] systemd-fstab-generator[1119]: Ignoring "noauto" option for root device
	[  +7.006786] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.638916] systemd-fstab-generator[1752]: Ignoring "noauto" option for root device
	[  +5.404732] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [3e0456b3927b1bbe74eede91d43ba32b5cd516b0f35e9634a2ca56dec8de70df] <==
	{"level":"info","ts":"2024-10-01T20:03:28.583Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"324857e3fe6e5c62","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-10-01T20:03:28.583Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-10-01T20:03:28.590Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"324857e3fe6e5c62 switched to configuration voters=(3623242536957402210)"}
	{"level":"info","ts":"2024-10-01T20:03:28.590Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e260bcd32c6c8b35","local-member-id":"324857e3fe6e5c62","added-peer-id":"324857e3fe6e5c62","added-peer-peer-urls":["https://192.168.39.195:2380"]}
	{"level":"info","ts":"2024-10-01T20:03:28.590Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e260bcd32c6c8b35","local-member-id":"324857e3fe6e5c62","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T20:03:28.590Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T20:03:28.604Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.195:2380"}
	{"level":"info","ts":"2024-10-01T20:03:28.604Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.195:2380"}
	{"level":"info","ts":"2024-10-01T20:03:28.604Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-01T20:03:28.605Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-01T20:03:28.605Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"324857e3fe6e5c62","initial-advertise-peer-urls":["https://192.168.39.195:2380"],"listen-peer-urls":["https://192.168.39.195:2380"],"advertise-client-urls":["https://192.168.39.195:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.195:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-01T20:03:30.436Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"324857e3fe6e5c62 is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-01T20:03:30.436Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"324857e3fe6e5c62 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-01T20:03:30.436Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"324857e3fe6e5c62 received MsgPreVoteResp from 324857e3fe6e5c62 at term 2"}
	{"level":"info","ts":"2024-10-01T20:03:30.436Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"324857e3fe6e5c62 became candidate at term 3"}
	{"level":"info","ts":"2024-10-01T20:03:30.436Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"324857e3fe6e5c62 received MsgVoteResp from 324857e3fe6e5c62 at term 3"}
	{"level":"info","ts":"2024-10-01T20:03:30.436Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"324857e3fe6e5c62 became leader at term 3"}
	{"level":"info","ts":"2024-10-01T20:03:30.436Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 324857e3fe6e5c62 elected leader 324857e3fe6e5c62 at term 3"}
	{"level":"info","ts":"2024-10-01T20:03:30.440Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"324857e3fe6e5c62","local-member-attributes":"{Name:test-preload-118977 ClientURLs:[https://192.168.39.195:2379]}","request-path":"/0/members/324857e3fe6e5c62/attributes","cluster-id":"e260bcd32c6c8b35","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-01T20:03:30.440Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T20:03:30.441Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-01T20:03:30.441Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-01T20:03:30.441Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T20:03:30.442Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-01T20:03:30.442Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.195:2379"}
	
	
	==> kernel <==
	 20:03:46 up 0 min,  0 users,  load average: 0.67, 0.18, 0.06
	Linux test-preload-118977 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d1aeb69b35e5edc7802e8234a19047527ac9205a44ada2e6f96c3d3ce534bdfe] <==
	I1001 20:03:32.829632       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1001 20:03:32.829672       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1001 20:03:32.864528       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1001 20:03:32.864557       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I1001 20:03:32.864604       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1001 20:03:32.876305       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E1001 20:03:32.950934       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1001 20:03:32.953581       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1001 20:03:32.967816       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1001 20:03:33.002192       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1001 20:03:33.009545       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1001 20:03:33.018549       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1001 20:03:33.020018       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1001 20:03:33.020640       1 cache.go:39] Caches are synced for autoregister controller
	I1001 20:03:33.021748       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1001 20:03:33.499814       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1001 20:03:33.823199       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1001 20:03:34.670081       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1001 20:03:34.687834       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1001 20:03:34.743481       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1001 20:03:34.764529       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1001 20:03:34.773324       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1001 20:03:34.800996       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1001 20:03:45.030320       1 controller.go:611] quota admission added evaluator for: endpoints
	I1001 20:03:45.079991       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [85595c942d29bac45d52ec6bc25b8e319ce5cfbe39a57bf66b2e242fde91b01a] <==
	W1001 20:03:44.888788       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="test-preload-118977" does not exist
	I1001 20:03:44.889001       1 shared_informer.go:262] Caches are synced for GC
	I1001 20:03:44.897154       1 shared_informer.go:262] Caches are synced for TTL
	I1001 20:03:44.902290       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1001 20:03:44.905216       1 shared_informer.go:262] Caches are synced for ReplicationController
	I1001 20:03:44.913016       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1001 20:03:44.916480       1 shared_informer.go:262] Caches are synced for daemon sets
	I1001 20:03:44.927421       1 shared_informer.go:262] Caches are synced for taint
	I1001 20:03:44.927572       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1001 20:03:44.927614       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W1001 20:03:44.927862       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-118977. Assuming now as a timestamp.
	I1001 20:03:44.927957       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1001 20:03:44.928494       1 event.go:294] "Event occurred" object="test-preload-118977" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-118977 event: Registered Node test-preload-118977 in Controller"
	I1001 20:03:44.942826       1 shared_informer.go:262] Caches are synced for node
	I1001 20:03:44.942922       1 range_allocator.go:173] Starting range CIDR allocator
	I1001 20:03:44.942929       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I1001 20:03:44.942939       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1001 20:03:44.951435       1 shared_informer.go:262] Caches are synced for persistent volume
	I1001 20:03:44.985674       1 shared_informer.go:262] Caches are synced for attach detach
	I1001 20:03:45.037058       1 event.go:294] "Event occurred" object="kube-system/kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again"
	I1001 20:03:45.080849       1 shared_informer.go:262] Caches are synced for resource quota
	I1001 20:03:45.108127       1 shared_informer.go:262] Caches are synced for resource quota
	I1001 20:03:45.520407       1 shared_informer.go:262] Caches are synced for garbage collector
	I1001 20:03:45.525822       1 shared_informer.go:262] Caches are synced for garbage collector
	I1001 20:03:45.525847       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [ad99ac4ef5389f4ef4e307e3d806fd3a51b6ea6db06509c3fbe061f43c6ce586] <==
	I1001 20:03:34.723593       1 node.go:163] Successfully retrieved node IP: 192.168.39.195
	I1001 20:03:34.723756       1 server_others.go:138] "Detected node IP" address="192.168.39.195"
	I1001 20:03:34.723831       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1001 20:03:34.784243       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1001 20:03:34.784320       1 server_others.go:206] "Using iptables Proxier"
	I1001 20:03:34.784944       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1001 20:03:34.785994       1 server.go:661] "Version info" version="v1.24.4"
	I1001 20:03:34.786097       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 20:03:34.791514       1 config.go:444] "Starting node config controller"
	I1001 20:03:34.792437       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1001 20:03:34.794200       1 config.go:317] "Starting service config controller"
	I1001 20:03:34.794314       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1001 20:03:34.794402       1 config.go:226] "Starting endpoint slice config controller"
	I1001 20:03:34.794454       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1001 20:03:34.893688       1 shared_informer.go:262] Caches are synced for node config
	I1001 20:03:34.894904       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1001 20:03:34.894964       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [b81db4834b1a03b6f2bd8691a4da7825fca6ff7fd36631c6d23e2a639d6456e8] <==
	I1001 20:03:28.784666       1 serving.go:348] Generated self-signed cert in-memory
	W1001 20:03:32.955105       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1001 20:03:32.955176       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1001 20:03:32.955189       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1001 20:03:32.955197       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1001 20:03:32.990193       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I1001 20:03:32.990301       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 20:03:32.995245       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1001 20:03:32.998341       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 20:03:32.998526       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1001 20:03:32.998673       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1001 20:03:33.099306       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 01 20:03:33 test-preload-118977 kubelet[1126]: I1001 20:03:33.019672    1126 setters.go:532] "Node became not ready" node="test-preload-118977" condition={Type:Ready Status:False LastHeartbeatTime:2024-10-01 20:03:33.019610208 +0000 UTC m=+5.764687215 LastTransitionTime:2024-10-01 20:03:33.019610208 +0000 UTC m=+5.764687215 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Oct 01 20:03:33 test-preload-118977 kubelet[1126]: I1001 20:03:33.365190    1126 apiserver.go:52] "Watching apiserver"
	Oct 01 20:03:33 test-preload-118977 kubelet[1126]: I1001 20:03:33.368898    1126 topology_manager.go:200] "Topology Admit Handler"
	Oct 01 20:03:33 test-preload-118977 kubelet[1126]: I1001 20:03:33.368998    1126 topology_manager.go:200] "Topology Admit Handler"
	Oct 01 20:03:33 test-preload-118977 kubelet[1126]: I1001 20:03:33.369073    1126 topology_manager.go:200] "Topology Admit Handler"
	Oct 01 20:03:33 test-preload-118977 kubelet[1126]: E1001 20:03:33.371226    1126 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-9dn6x" podUID=c948a6ba-2bd3-45d0-8e69-abc0519b0167
	Oct 01 20:03:33 test-preload-118977 kubelet[1126]: I1001 20:03:33.446236    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c2f552b1-85cd-4370-8849-55b4e2c44435-kube-proxy\") pod \"kube-proxy-z8dw5\" (UID: \"c2f552b1-85cd-4370-8849-55b4e2c44435\") " pod="kube-system/kube-proxy-z8dw5"
	Oct 01 20:03:33 test-preload-118977 kubelet[1126]: I1001 20:03:33.446646    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2f552b1-85cd-4370-8849-55b4e2c44435-lib-modules\") pod \"kube-proxy-z8dw5\" (UID: \"c2f552b1-85cd-4370-8849-55b4e2c44435\") " pod="kube-system/kube-proxy-z8dw5"
	Oct 01 20:03:33 test-preload-118977 kubelet[1126]: I1001 20:03:33.446681    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffc2k\" (UniqueName: \"kubernetes.io/projected/c2f552b1-85cd-4370-8849-55b4e2c44435-kube-api-access-ffc2k\") pod \"kube-proxy-z8dw5\" (UID: \"c2f552b1-85cd-4370-8849-55b4e2c44435\") " pod="kube-system/kube-proxy-z8dw5"
	Oct 01 20:03:33 test-preload-118977 kubelet[1126]: I1001 20:03:33.446717    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7e4a13fd-2975-48cc-a5d8-3716a427c939-tmp\") pod \"storage-provisioner\" (UID: \"7e4a13fd-2975-48cc-a5d8-3716a427c939\") " pod="kube-system/storage-provisioner"
	Oct 01 20:03:33 test-preload-118977 kubelet[1126]: I1001 20:03:33.446736    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg2bm\" (UniqueName: \"kubernetes.io/projected/7e4a13fd-2975-48cc-a5d8-3716a427c939-kube-api-access-sg2bm\") pod \"storage-provisioner\" (UID: \"7e4a13fd-2975-48cc-a5d8-3716a427c939\") " pod="kube-system/storage-provisioner"
	Oct 01 20:03:33 test-preload-118977 kubelet[1126]: I1001 20:03:33.446759    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2f552b1-85cd-4370-8849-55b4e2c44435-xtables-lock\") pod \"kube-proxy-z8dw5\" (UID: \"c2f552b1-85cd-4370-8849-55b4e2c44435\") " pod="kube-system/kube-proxy-z8dw5"
	Oct 01 20:03:33 test-preload-118977 kubelet[1126]: I1001 20:03:33.446779    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c948a6ba-2bd3-45d0-8e69-abc0519b0167-config-volume\") pod \"coredns-6d4b75cb6d-9dn6x\" (UID: \"c948a6ba-2bd3-45d0-8e69-abc0519b0167\") " pod="kube-system/coredns-6d4b75cb6d-9dn6x"
	Oct 01 20:03:33 test-preload-118977 kubelet[1126]: I1001 20:03:33.446797    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj8dg\" (UniqueName: \"kubernetes.io/projected/c948a6ba-2bd3-45d0-8e69-abc0519b0167-kube-api-access-bj8dg\") pod \"coredns-6d4b75cb6d-9dn6x\" (UID: \"c948a6ba-2bd3-45d0-8e69-abc0519b0167\") " pod="kube-system/coredns-6d4b75cb6d-9dn6x"
	Oct 01 20:03:33 test-preload-118977 kubelet[1126]: I1001 20:03:33.446813    1126 reconciler.go:159] "Reconciler: start to sync state"
	Oct 01 20:03:33 test-preload-118977 kubelet[1126]: E1001 20:03:33.551093    1126 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 01 20:03:33 test-preload-118977 kubelet[1126]: E1001 20:03:33.551362    1126 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/c948a6ba-2bd3-45d0-8e69-abc0519b0167-config-volume podName:c948a6ba-2bd3-45d0-8e69-abc0519b0167 nodeName:}" failed. No retries permitted until 2024-10-01 20:03:34.051290498 +0000 UTC m=+6.796367506 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c948a6ba-2bd3-45d0-8e69-abc0519b0167-config-volume") pod "coredns-6d4b75cb6d-9dn6x" (UID: "c948a6ba-2bd3-45d0-8e69-abc0519b0167") : object "kube-system"/"coredns" not registered
	Oct 01 20:03:34 test-preload-118977 kubelet[1126]: E1001 20:03:34.056191    1126 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 01 20:03:34 test-preload-118977 kubelet[1126]: E1001 20:03:34.056282    1126 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/c948a6ba-2bd3-45d0-8e69-abc0519b0167-config-volume podName:c948a6ba-2bd3-45d0-8e69-abc0519b0167 nodeName:}" failed. No retries permitted until 2024-10-01 20:03:35.056264669 +0000 UTC m=+7.801341666 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c948a6ba-2bd3-45d0-8e69-abc0519b0167-config-volume") pod "coredns-6d4b75cb6d-9dn6x" (UID: "c948a6ba-2bd3-45d0-8e69-abc0519b0167") : object "kube-system"/"coredns" not registered
	Oct 01 20:03:34 test-preload-118977 kubelet[1126]: E1001 20:03:34.508521    1126 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-9dn6x" podUID=c948a6ba-2bd3-45d0-8e69-abc0519b0167
	Oct 01 20:03:35 test-preload-118977 kubelet[1126]: E1001 20:03:35.061691    1126 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 01 20:03:35 test-preload-118977 kubelet[1126]: E1001 20:03:35.061805    1126 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/c948a6ba-2bd3-45d0-8e69-abc0519b0167-config-volume podName:c948a6ba-2bd3-45d0-8e69-abc0519b0167 nodeName:}" failed. No retries permitted until 2024-10-01 20:03:37.061787251 +0000 UTC m=+9.806864247 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c948a6ba-2bd3-45d0-8e69-abc0519b0167-config-volume") pod "coredns-6d4b75cb6d-9dn6x" (UID: "c948a6ba-2bd3-45d0-8e69-abc0519b0167") : object "kube-system"/"coredns" not registered
	Oct 01 20:03:36 test-preload-118977 kubelet[1126]: E1001 20:03:36.508180    1126 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-9dn6x" podUID=c948a6ba-2bd3-45d0-8e69-abc0519b0167
	Oct 01 20:03:37 test-preload-118977 kubelet[1126]: E1001 20:03:37.079577    1126 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 01 20:03:37 test-preload-118977 kubelet[1126]: E1001 20:03:37.079658    1126 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/c948a6ba-2bd3-45d0-8e69-abc0519b0167-config-volume podName:c948a6ba-2bd3-45d0-8e69-abc0519b0167 nodeName:}" failed. No retries permitted until 2024-10-01 20:03:41.079643494 +0000 UTC m=+13.824720501 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c948a6ba-2bd3-45d0-8e69-abc0519b0167-config-volume") pod "coredns-6d4b75cb6d-9dn6x" (UID: "c948a6ba-2bd3-45d0-8e69-abc0519b0167") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [b5a25a496d0f03d1604f957183e978c0ab48f0f63deee9675893ec694a9dffe2] <==
	I1001 20:03:34.510490       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-118977 -n test-preload-118977
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-118977 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-118977" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-118977
--- FAIL: TestPreload (179.21s)

                                                
                                    
TestKubernetesUpgrade (403.77s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-869396 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-869396 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m51.293351283s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-869396] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19736
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-869396" primary control-plane node in "kubernetes-upgrade-869396" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 20:09:34.659624   60814 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:09:34.659884   60814 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:09:34.659898   60814 out.go:358] Setting ErrFile to fd 2...
	I1001 20:09:34.659902   60814 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:09:34.660088   60814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 20:09:34.660893   60814 out.go:352] Setting JSON to false
	I1001 20:09:34.661839   60814 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6717,"bootTime":1727806658,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 20:09:34.661954   60814 start.go:139] virtualization: kvm guest
	I1001 20:09:34.664177   60814 out.go:177] * [kubernetes-upgrade-869396] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 20:09:34.665420   60814 notify.go:220] Checking for updates...
	I1001 20:09:34.665435   60814 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 20:09:34.666732   60814 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 20:09:34.667847   60814 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:09:34.669104   60814 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 20:09:34.670322   60814 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 20:09:34.671329   60814 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 20:09:34.672886   60814 config.go:182] Loaded profile config "NoKubernetes-791490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1001 20:09:34.673035   60814 config.go:182] Loaded profile config "cert-expiration-402897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:09:34.673168   60814 config.go:182] Loaded profile config "cert-options-432128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:09:34.673312   60814 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 20:09:34.713338   60814 out.go:177] * Using the kvm2 driver based on user configuration
	I1001 20:09:34.714327   60814 start.go:297] selected driver: kvm2
	I1001 20:09:34.714342   60814 start.go:901] validating driver "kvm2" against <nil>
	I1001 20:09:34.714371   60814 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 20:09:34.715434   60814 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:09:34.715539   60814 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-11198/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 20:09:34.732471   60814 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 20:09:34.732526   60814 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 20:09:34.732776   60814 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 20:09:34.732800   60814 cni.go:84] Creating CNI manager for ""
	I1001 20:09:34.732838   60814 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:09:34.732853   60814 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 20:09:34.732897   60814 start.go:340] cluster config:
	{Name:kubernetes-upgrade-869396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-869396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:09:34.733005   60814 iso.go:125] acquiring lock: {Name:mk06aa0d5182c9fbfa5e40313025cbec2b4400a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:09:34.734386   60814 out.go:177] * Starting "kubernetes-upgrade-869396" primary control-plane node in "kubernetes-upgrade-869396" cluster
	I1001 20:09:34.735442   60814 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1001 20:09:34.735502   60814 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1001 20:09:34.735518   60814 cache.go:56] Caching tarball of preloaded images
	I1001 20:09:34.735631   60814 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 20:09:34.735647   60814 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1001 20:09:34.735752   60814 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/config.json ...
	I1001 20:09:34.735790   60814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/config.json: {Name:mk2093b13dd94a86f283df9995c5c53906768cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:09:34.735967   60814 start.go:360] acquireMachinesLock for kubernetes-upgrade-869396: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 20:09:57.509105   60814 start.go:364] duration metric: took 22.773054094s to acquireMachinesLock for "kubernetes-upgrade-869396"
	I1001 20:09:57.509176   60814 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-869396 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-869396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 20:09:57.509302   60814 start.go:125] createHost starting for "" (driver="kvm2")
	I1001 20:09:57.511367   60814 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 20:09:57.511564   60814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:09:57.511614   60814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:09:57.528059   60814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35715
	I1001 20:09:57.528576   60814 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:09:57.529218   60814 main.go:141] libmachine: Using API Version  1
	I1001 20:09:57.529249   60814 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:09:57.529560   60814 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:09:57.529746   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetMachineName
	I1001 20:09:57.529886   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .DriverName
	I1001 20:09:57.530057   60814 start.go:159] libmachine.API.Create for "kubernetes-upgrade-869396" (driver="kvm2")
	I1001 20:09:57.530085   60814 client.go:168] LocalClient.Create starting
	I1001 20:09:57.530119   60814 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem
	I1001 20:09:57.530163   60814 main.go:141] libmachine: Decoding PEM data...
	I1001 20:09:57.530184   60814 main.go:141] libmachine: Parsing certificate...
	I1001 20:09:57.530248   60814 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem
	I1001 20:09:57.530271   60814 main.go:141] libmachine: Decoding PEM data...
	I1001 20:09:57.530293   60814 main.go:141] libmachine: Parsing certificate...
	I1001 20:09:57.530320   60814 main.go:141] libmachine: Running pre-create checks...
	I1001 20:09:57.530338   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .PreCreateCheck
	I1001 20:09:57.530753   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetConfigRaw
	I1001 20:09:57.531191   60814 main.go:141] libmachine: Creating machine...
	I1001 20:09:57.531208   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .Create
	I1001 20:09:57.531352   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Creating KVM machine...
	I1001 20:09:57.532615   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | found existing default KVM network
	I1001 20:09:57.533675   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | I1001 20:09:57.533530   61048 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ef:4b:e9} reservation:<nil>}
	I1001 20:09:57.534533   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | I1001 20:09:57.534444   61048 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000221ce0}
	I1001 20:09:57.534583   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | created network xml: 
	I1001 20:09:57.534596   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | <network>
	I1001 20:09:57.534609   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG |   <name>mk-kubernetes-upgrade-869396</name>
	I1001 20:09:57.534628   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG |   <dns enable='no'/>
	I1001 20:09:57.534637   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG |   
	I1001 20:09:57.534648   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1001 20:09:57.534658   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG |     <dhcp>
	I1001 20:09:57.534667   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1001 20:09:57.534677   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG |     </dhcp>
	I1001 20:09:57.534686   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG |   </ip>
	I1001 20:09:57.534701   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG |   
	I1001 20:09:57.534726   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | </network>
	I1001 20:09:57.534750   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | 
	I1001 20:09:57.540094   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | trying to create private KVM network mk-kubernetes-upgrade-869396 192.168.50.0/24...
	I1001 20:09:57.610213   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | private KVM network mk-kubernetes-upgrade-869396 192.168.50.0/24 created
	I1001 20:09:57.610302   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | I1001 20:09:57.610184   61048 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 20:09:57.610328   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Setting up store path in /home/jenkins/minikube-integration/19736-11198/.minikube/machines/kubernetes-upgrade-869396 ...
	I1001 20:09:57.610348   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Building disk image from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 20:09:57.610364   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Downloading /home/jenkins/minikube-integration/19736-11198/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 20:09:57.855873   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | I1001 20:09:57.855714   61048 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/kubernetes-upgrade-869396/id_rsa...
	I1001 20:09:57.996547   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | I1001 20:09:57.996338   61048 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/kubernetes-upgrade-869396/kubernetes-upgrade-869396.rawdisk...
	I1001 20:09:57.996592   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | Writing magic tar header
	I1001 20:09:57.996611   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/kubernetes-upgrade-869396 (perms=drwx------)
	I1001 20:09:57.996632   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines (perms=drwxr-xr-x)
	I1001 20:09:57.996643   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube (perms=drwxr-xr-x)
	I1001 20:09:57.996663   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198 (perms=drwxrwxr-x)
	I1001 20:09:57.996680   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | Writing SSH key tar header
	I1001 20:09:57.996693   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 20:09:57.996716   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 20:09:57.996738   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | I1001 20:09:57.996495   61048 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/kubernetes-upgrade-869396 ...
	I1001 20:09:57.996750   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Creating domain...
	I1001 20:09:57.996790   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/kubernetes-upgrade-869396
	I1001 20:09:57.996827   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines
	I1001 20:09:57.996861   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 20:09:57.996891   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198
	I1001 20:09:57.996913   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 20:09:57.996927   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | Checking permissions on dir: /home/jenkins
	I1001 20:09:57.996944   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | Checking permissions on dir: /home
	I1001 20:09:57.996954   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | Skipping /home - not owner
	I1001 20:09:57.998155   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) define libvirt domain using xml: 
	I1001 20:09:57.998170   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) <domain type='kvm'>
	I1001 20:09:57.998181   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)   <name>kubernetes-upgrade-869396</name>
	I1001 20:09:57.998187   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)   <memory unit='MiB'>2200</memory>
	I1001 20:09:57.998221   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)   <vcpu>2</vcpu>
	I1001 20:09:57.998252   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)   <features>
	I1001 20:09:57.998263   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)     <acpi/>
	I1001 20:09:57.998280   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)     <apic/>
	I1001 20:09:57.998305   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)     <pae/>
	I1001 20:09:57.998321   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)     
	I1001 20:09:57.998327   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)   </features>
	I1001 20:09:57.998334   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)   <cpu mode='host-passthrough'>
	I1001 20:09:57.998340   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)   
	I1001 20:09:57.998347   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)   </cpu>
	I1001 20:09:57.998353   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)   <os>
	I1001 20:09:57.998375   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)     <type>hvm</type>
	I1001 20:09:57.998384   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)     <boot dev='cdrom'/>
	I1001 20:09:57.998392   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)     <boot dev='hd'/>
	I1001 20:09:57.998398   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)     <bootmenu enable='no'/>
	I1001 20:09:57.998405   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)   </os>
	I1001 20:09:57.998422   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)   <devices>
	I1001 20:09:57.998428   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)     <disk type='file' device='cdrom'>
	I1001 20:09:57.998467   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/kubernetes-upgrade-869396/boot2docker.iso'/>
	I1001 20:09:57.998505   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)       <target dev='hdc' bus='scsi'/>
	I1001 20:09:57.998514   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)       <readonly/>
	I1001 20:09:57.998530   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)     </disk>
	I1001 20:09:57.998539   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)     <disk type='file' device='disk'>
	I1001 20:09:57.998551   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 20:09:57.998569   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/kubernetes-upgrade-869396/kubernetes-upgrade-869396.rawdisk'/>
	I1001 20:09:57.998581   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)       <target dev='hda' bus='virtio'/>
	I1001 20:09:57.998593   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)     </disk>
	I1001 20:09:57.998604   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)     <interface type='network'>
	I1001 20:09:57.998618   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)       <source network='mk-kubernetes-upgrade-869396'/>
	I1001 20:09:57.998632   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)       <model type='virtio'/>
	I1001 20:09:57.998659   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)     </interface>
	I1001 20:09:57.998675   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)     <interface type='network'>
	I1001 20:09:57.998699   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)       <source network='default'/>
	I1001 20:09:57.998733   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)       <model type='virtio'/>
	I1001 20:09:57.998746   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)     </interface>
	I1001 20:09:57.998754   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)     <serial type='pty'>
	I1001 20:09:57.998764   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)       <target port='0'/>
	I1001 20:09:57.998773   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)     </serial>
	I1001 20:09:57.998780   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)     <console type='pty'>
	I1001 20:09:57.998790   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)       <target type='serial' port='0'/>
	I1001 20:09:57.998798   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)     </console>
	I1001 20:09:57.998813   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)     <rng model='virtio'>
	I1001 20:09:57.998824   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)       <backend model='random'>/dev/random</backend>
	I1001 20:09:57.998834   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)     </rng>
	I1001 20:09:57.998845   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)     
	I1001 20:09:57.998855   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)     
	I1001 20:09:57.998863   60814 main.go:141] libmachine: (kubernetes-upgrade-869396)   </devices>
	I1001 20:09:57.998873   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) </domain>
	I1001 20:09:57.998881   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) 
	I1001 20:09:58.003380   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:47:3b:9c in network default
	I1001 20:09:58.004144   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Ensuring networks are active...
	I1001 20:09:58.004179   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:09:58.004986   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Ensuring network default is active
	I1001 20:09:58.005288   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Ensuring network mk-kubernetes-upgrade-869396 is active
	I1001 20:09:58.005777   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Getting domain xml...
	I1001 20:09:58.006567   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Creating domain...
	I1001 20:09:59.392128   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Waiting to get IP...
	I1001 20:09:59.393098   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:09:59.393667   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | unable to find current IP address of domain kubernetes-upgrade-869396 in network mk-kubernetes-upgrade-869396
	I1001 20:09:59.393695   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | I1001 20:09:59.393650   61048 retry.go:31] will retry after 267.457049ms: waiting for machine to come up
	I1001 20:09:59.663473   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:09:59.664085   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | unable to find current IP address of domain kubernetes-upgrade-869396 in network mk-kubernetes-upgrade-869396
	I1001 20:09:59.664118   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | I1001 20:09:59.664041   61048 retry.go:31] will retry after 251.347987ms: waiting for machine to come up
	I1001 20:09:59.916786   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:09:59.917283   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | unable to find current IP address of domain kubernetes-upgrade-869396 in network mk-kubernetes-upgrade-869396
	I1001 20:09:59.917311   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | I1001 20:09:59.917255   61048 retry.go:31] will retry after 351.11198ms: waiting for machine to come up
	I1001 20:10:00.269770   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:00.270354   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | unable to find current IP address of domain kubernetes-upgrade-869396 in network mk-kubernetes-upgrade-869396
	I1001 20:10:00.270383   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | I1001 20:10:00.270315   61048 retry.go:31] will retry after 412.378102ms: waiting for machine to come up
	I1001 20:10:00.684169   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:00.684725   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | unable to find current IP address of domain kubernetes-upgrade-869396 in network mk-kubernetes-upgrade-869396
	I1001 20:10:00.684756   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | I1001 20:10:00.684673   61048 retry.go:31] will retry after 595.427098ms: waiting for machine to come up
	I1001 20:10:01.281612   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:01.282472   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | unable to find current IP address of domain kubernetes-upgrade-869396 in network mk-kubernetes-upgrade-869396
	I1001 20:10:01.282507   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | I1001 20:10:01.282410   61048 retry.go:31] will retry after 702.46279ms: waiting for machine to come up
	I1001 20:10:01.986287   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:01.986834   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | unable to find current IP address of domain kubernetes-upgrade-869396 in network mk-kubernetes-upgrade-869396
	I1001 20:10:01.986854   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | I1001 20:10:01.986797   61048 retry.go:31] will retry after 1.188763463s: waiting for machine to come up
	I1001 20:10:03.177439   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:03.177923   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | unable to find current IP address of domain kubernetes-upgrade-869396 in network mk-kubernetes-upgrade-869396
	I1001 20:10:03.177957   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | I1001 20:10:03.177896   61048 retry.go:31] will retry after 1.221821013s: waiting for machine to come up
	I1001 20:10:04.401752   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:04.402263   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | unable to find current IP address of domain kubernetes-upgrade-869396 in network mk-kubernetes-upgrade-869396
	I1001 20:10:04.402286   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | I1001 20:10:04.402229   61048 retry.go:31] will retry after 1.445066566s: waiting for machine to come up
	I1001 20:10:05.849982   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:05.850471   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | unable to find current IP address of domain kubernetes-upgrade-869396 in network mk-kubernetes-upgrade-869396
	I1001 20:10:05.850507   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | I1001 20:10:05.850416   61048 retry.go:31] will retry after 1.520723569s: waiting for machine to come up
	I1001 20:10:07.373575   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:07.374167   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | unable to find current IP address of domain kubernetes-upgrade-869396 in network mk-kubernetes-upgrade-869396
	I1001 20:10:07.374211   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | I1001 20:10:07.374132   61048 retry.go:31] will retry after 2.8171751s: waiting for machine to come up
	I1001 20:10:10.193966   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:10.194415   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | unable to find current IP address of domain kubernetes-upgrade-869396 in network mk-kubernetes-upgrade-869396
	I1001 20:10:10.194443   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | I1001 20:10:10.194378   61048 retry.go:31] will retry after 2.505432258s: waiting for machine to come up
	I1001 20:10:12.701961   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:12.702523   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | unable to find current IP address of domain kubernetes-upgrade-869396 in network mk-kubernetes-upgrade-869396
	I1001 20:10:12.702548   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | I1001 20:10:12.702420   61048 retry.go:31] will retry after 3.073483938s: waiting for machine to come up
	I1001 20:10:15.777302   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:15.777844   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | unable to find current IP address of domain kubernetes-upgrade-869396 in network mk-kubernetes-upgrade-869396
	I1001 20:10:15.777876   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | I1001 20:10:15.777797   61048 retry.go:31] will retry after 3.600366932s: waiting for machine to come up
	I1001 20:10:19.380946   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:19.381473   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Found IP for machine: 192.168.50.159
	I1001 20:10:19.381499   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Reserving static IP address...
	I1001 20:10:19.381515   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has current primary IP address 192.168.50.159 and MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:19.382031   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-869396", mac: "52:54:00:0b:26:ea", ip: "192.168.50.159"} in network mk-kubernetes-upgrade-869396
	I1001 20:10:19.458238   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Reserved static IP address: 192.168.50.159
	I1001 20:10:19.458269   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Waiting for SSH to be available...
	I1001 20:10:19.458279   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | Getting to WaitForSSH function...
	I1001 20:10:19.461033   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:19.461609   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:ea", ip: ""} in network mk-kubernetes-upgrade-869396: {Iface:virbr2 ExpiryTime:2024-10-01 21:10:12 +0000 UTC Type:0 Mac:52:54:00:0b:26:ea Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0b:26:ea}
	I1001 20:10:19.461642   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined IP address 192.168.50.159 and MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:19.461772   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | Using SSH client type: external
	I1001 20:10:19.461801   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/kubernetes-upgrade-869396/id_rsa (-rw-------)
	I1001 20:10:19.461837   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/kubernetes-upgrade-869396/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 20:10:19.461858   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | About to run SSH command:
	I1001 20:10:19.461883   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | exit 0
	I1001 20:10:19.584233   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | SSH cmd err, output: <nil>: 
	I1001 20:10:19.584514   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) KVM machine creation complete!
	I1001 20:10:19.584919   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetConfigRaw
	I1001 20:10:19.585519   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .DriverName
	I1001 20:10:19.585706   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .DriverName
	I1001 20:10:19.585904   60814 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 20:10:19.585918   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetState
	I1001 20:10:19.587225   60814 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 20:10:19.587238   60814 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 20:10:19.587243   60814 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 20:10:19.587248   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHHostname
	I1001 20:10:19.589362   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:19.589753   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:ea", ip: ""} in network mk-kubernetes-upgrade-869396: {Iface:virbr2 ExpiryTime:2024-10-01 21:10:12 +0000 UTC Type:0 Mac:52:54:00:0b:26:ea Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:kubernetes-upgrade-869396 Clientid:01:52:54:00:0b:26:ea}
	I1001 20:10:19.589790   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined IP address 192.168.50.159 and MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:19.589962   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHPort
	I1001 20:10:19.590172   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHKeyPath
	I1001 20:10:19.590317   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHKeyPath
	I1001 20:10:19.590467   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHUsername
	I1001 20:10:19.590596   60814 main.go:141] libmachine: Using SSH client type: native
	I1001 20:10:19.590803   60814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I1001 20:10:19.590813   60814 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 20:10:19.691506   60814 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 20:10:19.691570   60814 main.go:141] libmachine: Detecting the provisioner...
	I1001 20:10:19.691585   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHHostname
	I1001 20:10:19.694549   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:19.695035   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:ea", ip: ""} in network mk-kubernetes-upgrade-869396: {Iface:virbr2 ExpiryTime:2024-10-01 21:10:12 +0000 UTC Type:0 Mac:52:54:00:0b:26:ea Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:kubernetes-upgrade-869396 Clientid:01:52:54:00:0b:26:ea}
	I1001 20:10:19.695073   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined IP address 192.168.50.159 and MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:19.695162   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHPort
	I1001 20:10:19.695356   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHKeyPath
	I1001 20:10:19.695511   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHKeyPath
	I1001 20:10:19.695665   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHUsername
	I1001 20:10:19.695831   60814 main.go:141] libmachine: Using SSH client type: native
	I1001 20:10:19.696009   60814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I1001 20:10:19.696031   60814 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 20:10:19.796982   60814 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 20:10:19.797086   60814 main.go:141] libmachine: found compatible host: buildroot
	I1001 20:10:19.797100   60814 main.go:141] libmachine: Provisioning with buildroot...
	I1001 20:10:19.797112   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetMachineName
	I1001 20:10:19.797343   60814 buildroot.go:166] provisioning hostname "kubernetes-upgrade-869396"
	I1001 20:10:19.797374   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetMachineName
	I1001 20:10:19.797523   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHHostname
	I1001 20:10:19.800089   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:19.800536   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:ea", ip: ""} in network mk-kubernetes-upgrade-869396: {Iface:virbr2 ExpiryTime:2024-10-01 21:10:12 +0000 UTC Type:0 Mac:52:54:00:0b:26:ea Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:kubernetes-upgrade-869396 Clientid:01:52:54:00:0b:26:ea}
	I1001 20:10:19.800583   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined IP address 192.168.50.159 and MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:19.800755   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHPort
	I1001 20:10:19.800970   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHKeyPath
	I1001 20:10:19.801123   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHKeyPath
	I1001 20:10:19.801245   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHUsername
	I1001 20:10:19.801399   60814 main.go:141] libmachine: Using SSH client type: native
	I1001 20:10:19.801574   60814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I1001 20:10:19.801586   60814 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-869396 && echo "kubernetes-upgrade-869396" | sudo tee /etc/hostname
	I1001 20:10:19.919094   60814 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-869396
	
	I1001 20:10:19.919129   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHHostname
	I1001 20:10:19.921815   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:19.922159   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:ea", ip: ""} in network mk-kubernetes-upgrade-869396: {Iface:virbr2 ExpiryTime:2024-10-01 21:10:12 +0000 UTC Type:0 Mac:52:54:00:0b:26:ea Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:kubernetes-upgrade-869396 Clientid:01:52:54:00:0b:26:ea}
	I1001 20:10:19.922199   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined IP address 192.168.50.159 and MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:19.922356   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHPort
	I1001 20:10:19.922556   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHKeyPath
	I1001 20:10:19.922733   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHKeyPath
	I1001 20:10:19.922852   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHUsername
	I1001 20:10:19.923005   60814 main.go:141] libmachine: Using SSH client type: native
	I1001 20:10:19.923187   60814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I1001 20:10:19.923205   60814 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-869396' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-869396/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-869396' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 20:10:20.032876   60814 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 20:10:20.032929   60814 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 20:10:20.032952   60814 buildroot.go:174] setting up certificates
	I1001 20:10:20.032961   60814 provision.go:84] configureAuth start
	I1001 20:10:20.032972   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetMachineName
	I1001 20:10:20.033273   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetIP
	I1001 20:10:20.036117   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:20.036524   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:ea", ip: ""} in network mk-kubernetes-upgrade-869396: {Iface:virbr2 ExpiryTime:2024-10-01 21:10:12 +0000 UTC Type:0 Mac:52:54:00:0b:26:ea Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:kubernetes-upgrade-869396 Clientid:01:52:54:00:0b:26:ea}
	I1001 20:10:20.036553   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined IP address 192.168.50.159 and MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:20.036705   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHHostname
	I1001 20:10:20.038899   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:20.039256   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:ea", ip: ""} in network mk-kubernetes-upgrade-869396: {Iface:virbr2 ExpiryTime:2024-10-01 21:10:12 +0000 UTC Type:0 Mac:52:54:00:0b:26:ea Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:kubernetes-upgrade-869396 Clientid:01:52:54:00:0b:26:ea}
	I1001 20:10:20.039285   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined IP address 192.168.50.159 and MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:20.039372   60814 provision.go:143] copyHostCerts
	I1001 20:10:20.039432   60814 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 20:10:20.039441   60814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 20:10:20.039493   60814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 20:10:20.039639   60814 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 20:10:20.039649   60814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 20:10:20.039676   60814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 20:10:20.039741   60814 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 20:10:20.039748   60814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 20:10:20.039766   60814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 20:10:20.039824   60814 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-869396 san=[127.0.0.1 192.168.50.159 kubernetes-upgrade-869396 localhost minikube]
	I1001 20:10:20.147256   60814 provision.go:177] copyRemoteCerts
	I1001 20:10:20.147308   60814 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 20:10:20.147332   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHHostname
	I1001 20:10:20.150427   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:20.150800   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:ea", ip: ""} in network mk-kubernetes-upgrade-869396: {Iface:virbr2 ExpiryTime:2024-10-01 21:10:12 +0000 UTC Type:0 Mac:52:54:00:0b:26:ea Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:kubernetes-upgrade-869396 Clientid:01:52:54:00:0b:26:ea}
	I1001 20:10:20.150834   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined IP address 192.168.50.159 and MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:20.151114   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHPort
	I1001 20:10:20.151307   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHKeyPath
	I1001 20:10:20.151478   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHUsername
	I1001 20:10:20.151635   60814 sshutil.go:53] new ssh client: &{IP:192.168.50.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/kubernetes-upgrade-869396/id_rsa Username:docker}
	I1001 20:10:20.230389   60814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 20:10:20.254139   60814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1001 20:10:20.278055   60814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 20:10:20.301886   60814 provision.go:87] duration metric: took 268.911494ms to configureAuth
	I1001 20:10:20.301915   60814 buildroot.go:189] setting minikube options for container-runtime
	I1001 20:10:20.302076   60814 config.go:182] Loaded profile config "kubernetes-upgrade-869396": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1001 20:10:20.302149   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHHostname
	I1001 20:10:20.304754   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:20.305136   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:ea", ip: ""} in network mk-kubernetes-upgrade-869396: {Iface:virbr2 ExpiryTime:2024-10-01 21:10:12 +0000 UTC Type:0 Mac:52:54:00:0b:26:ea Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:kubernetes-upgrade-869396 Clientid:01:52:54:00:0b:26:ea}
	I1001 20:10:20.305163   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined IP address 192.168.50.159 and MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:20.305431   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHPort
	I1001 20:10:20.305624   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHKeyPath
	I1001 20:10:20.305772   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHKeyPath
	I1001 20:10:20.305923   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHUsername
	I1001 20:10:20.306188   60814 main.go:141] libmachine: Using SSH client type: native
	I1001 20:10:20.306350   60814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I1001 20:10:20.306373   60814 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 20:10:20.534360   60814 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 20:10:20.534390   60814 main.go:141] libmachine: Checking connection to Docker...
	I1001 20:10:20.534402   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetURL
	I1001 20:10:20.536082   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | Using libvirt version 6000000
	I1001 20:10:20.538765   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:20.539141   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:ea", ip: ""} in network mk-kubernetes-upgrade-869396: {Iface:virbr2 ExpiryTime:2024-10-01 21:10:12 +0000 UTC Type:0 Mac:52:54:00:0b:26:ea Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:kubernetes-upgrade-869396 Clientid:01:52:54:00:0b:26:ea}
	I1001 20:10:20.539177   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined IP address 192.168.50.159 and MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:20.539352   60814 main.go:141] libmachine: Docker is up and running!
	I1001 20:10:20.539365   60814 main.go:141] libmachine: Reticulating splines...
	I1001 20:10:20.539371   60814 client.go:171] duration metric: took 23.009279692s to LocalClient.Create
	I1001 20:10:20.539392   60814 start.go:167] duration metric: took 23.009336138s to libmachine.API.Create "kubernetes-upgrade-869396"
	I1001 20:10:20.539402   60814 start.go:293] postStartSetup for "kubernetes-upgrade-869396" (driver="kvm2")
	I1001 20:10:20.539412   60814 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 20:10:20.539436   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .DriverName
	I1001 20:10:20.539652   60814 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 20:10:20.539681   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHHostname
	I1001 20:10:20.542085   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:20.542450   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:ea", ip: ""} in network mk-kubernetes-upgrade-869396: {Iface:virbr2 ExpiryTime:2024-10-01 21:10:12 +0000 UTC Type:0 Mac:52:54:00:0b:26:ea Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:kubernetes-upgrade-869396 Clientid:01:52:54:00:0b:26:ea}
	I1001 20:10:20.542475   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined IP address 192.168.50.159 and MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:20.542648   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHPort
	I1001 20:10:20.542818   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHKeyPath
	I1001 20:10:20.542988   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHUsername
	I1001 20:10:20.543111   60814 sshutil.go:53] new ssh client: &{IP:192.168.50.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/kubernetes-upgrade-869396/id_rsa Username:docker}
	I1001 20:10:20.623561   60814 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 20:10:20.627582   60814 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 20:10:20.627612   60814 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 20:10:20.627701   60814 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 20:10:20.627851   60814 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 20:10:20.627992   60814 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 20:10:20.639855   60814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 20:10:20.663610   60814 start.go:296] duration metric: took 124.191548ms for postStartSetup
	I1001 20:10:20.663705   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetConfigRaw
	I1001 20:10:20.664406   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetIP
	I1001 20:10:20.667390   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:20.667739   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:ea", ip: ""} in network mk-kubernetes-upgrade-869396: {Iface:virbr2 ExpiryTime:2024-10-01 21:10:12 +0000 UTC Type:0 Mac:52:54:00:0b:26:ea Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:kubernetes-upgrade-869396 Clientid:01:52:54:00:0b:26:ea}
	I1001 20:10:20.667766   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined IP address 192.168.50.159 and MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:20.668055   60814 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/config.json ...
	I1001 20:10:20.668262   60814 start.go:128] duration metric: took 23.158947392s to createHost
	I1001 20:10:20.668284   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHHostname
	I1001 20:10:20.670861   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:20.671220   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:ea", ip: ""} in network mk-kubernetes-upgrade-869396: {Iface:virbr2 ExpiryTime:2024-10-01 21:10:12 +0000 UTC Type:0 Mac:52:54:00:0b:26:ea Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:kubernetes-upgrade-869396 Clientid:01:52:54:00:0b:26:ea}
	I1001 20:10:20.671244   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined IP address 192.168.50.159 and MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:20.671446   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHPort
	I1001 20:10:20.671660   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHKeyPath
	I1001 20:10:20.671824   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHKeyPath
	I1001 20:10:20.671997   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHUsername
	I1001 20:10:20.672138   60814 main.go:141] libmachine: Using SSH client type: native
	I1001 20:10:20.672301   60814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I1001 20:10:20.672311   60814 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 20:10:20.773213   60814 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727813420.747724114
	
	I1001 20:10:20.773235   60814 fix.go:216] guest clock: 1727813420.747724114
	I1001 20:10:20.773242   60814 fix.go:229] Guest: 2024-10-01 20:10:20.747724114 +0000 UTC Remote: 2024-10-01 20:10:20.668273859 +0000 UTC m=+46.050259043 (delta=79.450255ms)
	I1001 20:10:20.773294   60814 fix.go:200] guest clock delta is within tolerance: 79.450255ms
	I1001 20:10:20.773302   60814 start.go:83] releasing machines lock for "kubernetes-upgrade-869396", held for 23.264158255s
	I1001 20:10:20.773335   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .DriverName
	I1001 20:10:20.773612   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetIP
	I1001 20:10:20.776637   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:20.777037   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:ea", ip: ""} in network mk-kubernetes-upgrade-869396: {Iface:virbr2 ExpiryTime:2024-10-01 21:10:12 +0000 UTC Type:0 Mac:52:54:00:0b:26:ea Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:kubernetes-upgrade-869396 Clientid:01:52:54:00:0b:26:ea}
	I1001 20:10:20.777073   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined IP address 192.168.50.159 and MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:20.777279   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .DriverName
	I1001 20:10:20.777831   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .DriverName
	I1001 20:10:20.778054   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .DriverName
	I1001 20:10:20.778140   60814 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 20:10:20.778198   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHHostname
	I1001 20:10:20.778237   60814 ssh_runner.go:195] Run: cat /version.json
	I1001 20:10:20.778277   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHHostname
	I1001 20:10:20.781168   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:20.781350   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:20.781452   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:ea", ip: ""} in network mk-kubernetes-upgrade-869396: {Iface:virbr2 ExpiryTime:2024-10-01 21:10:12 +0000 UTC Type:0 Mac:52:54:00:0b:26:ea Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:kubernetes-upgrade-869396 Clientid:01:52:54:00:0b:26:ea}
	I1001 20:10:20.781477   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined IP address 192.168.50.159 and MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:20.781651   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:ea", ip: ""} in network mk-kubernetes-upgrade-869396: {Iface:virbr2 ExpiryTime:2024-10-01 21:10:12 +0000 UTC Type:0 Mac:52:54:00:0b:26:ea Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:kubernetes-upgrade-869396 Clientid:01:52:54:00:0b:26:ea}
	I1001 20:10:20.781692   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHPort
	I1001 20:10:20.781705   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined IP address 192.168.50.159 and MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:20.781863   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHPort
	I1001 20:10:20.781939   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHKeyPath
	I1001 20:10:20.782021   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHKeyPath
	I1001 20:10:20.782102   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHUsername
	I1001 20:10:20.782150   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHUsername
	I1001 20:10:20.782230   60814 sshutil.go:53] new ssh client: &{IP:192.168.50.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/kubernetes-upgrade-869396/id_rsa Username:docker}
	I1001 20:10:20.782320   60814 sshutil.go:53] new ssh client: &{IP:192.168.50.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/kubernetes-upgrade-869396/id_rsa Username:docker}
	I1001 20:10:20.865697   60814 ssh_runner.go:195] Run: systemctl --version
	I1001 20:10:20.905897   60814 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 20:10:21.074245   60814 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 20:10:21.082964   60814 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 20:10:21.083039   60814 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 20:10:21.105136   60814 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 20:10:21.105160   60814 start.go:495] detecting cgroup driver to use...
	I1001 20:10:21.105224   60814 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 20:10:21.128141   60814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 20:10:21.144879   60814 docker.go:217] disabling cri-docker service (if available) ...
	I1001 20:10:21.144949   60814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 20:10:21.160129   60814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 20:10:21.174975   60814 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 20:10:21.297540   60814 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 20:10:21.461451   60814 docker.go:233] disabling docker service ...
	I1001 20:10:21.461524   60814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 20:10:21.478513   60814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 20:10:21.492371   60814 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 20:10:21.616988   60814 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 20:10:21.743203   60814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 20:10:21.757582   60814 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 20:10:21.776917   60814 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1001 20:10:21.776990   60814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:10:21.790838   60814 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 20:10:21.790906   60814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:10:21.803088   60814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:10:21.813732   60814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:10:21.824755   60814 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 20:10:21.835796   60814 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 20:10:21.845187   60814 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 20:10:21.845261   60814 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 20:10:21.858454   60814 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 20:10:21.869421   60814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:10:22.000700   60814 ssh_runner.go:195] Run: sudo systemctl restart crio
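	(For orientation: the sed invocations above rewrite the pause_image and cgroup_manager keys in /etc/crio/crio.conf.d/02-crio.conf before cri-o is restarted. Below is a minimal Go sketch of the same in-place edit; it is illustrative only and not minikube's actual code, with the file path and values taken from the logged commands.)

// crio_conf.go: rewrite pause_image and cgroup_manager in a cri-o drop-in,
// mirroring the sed commands logged above. Illustrative sketch only; minikube
// itself performs these edits over SSH via ssh_runner.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const confPath = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(confPath)
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(confPath, data, 0o644); err != nil {
		log.Fatal(err)
	}
	// A "systemctl restart crio" is still required for the change to take effect,
	// which is what the following log lines do.
}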
	I1001 20:10:22.092738   60814 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 20:10:22.092806   60814 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 20:10:22.098065   60814 start.go:563] Will wait 60s for crictl version
	I1001 20:10:22.098118   60814 ssh_runner.go:195] Run: which crictl
	I1001 20:10:22.103159   60814 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 20:10:22.156864   60814 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 20:10:22.156949   60814 ssh_runner.go:195] Run: crio --version
	I1001 20:10:22.185591   60814 ssh_runner.go:195] Run: crio --version
	I1001 20:10:22.215522   60814 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1001 20:10:22.216646   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetIP
	I1001 20:10:22.219626   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:22.220063   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:ea", ip: ""} in network mk-kubernetes-upgrade-869396: {Iface:virbr2 ExpiryTime:2024-10-01 21:10:12 +0000 UTC Type:0 Mac:52:54:00:0b:26:ea Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:kubernetes-upgrade-869396 Clientid:01:52:54:00:0b:26:ea}
	I1001 20:10:22.220095   60814 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined IP address 192.168.50.159 and MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:10:22.220281   60814 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1001 20:10:22.224323   60814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 20:10:22.237097   60814 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-869396 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-869396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 20:10:22.237209   60814 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1001 20:10:22.237265   60814 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:10:22.270133   60814 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1001 20:10:22.270194   60814 ssh_runner.go:195] Run: which lz4
	I1001 20:10:22.274267   60814 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 20:10:22.278828   60814 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 20:10:22.278875   60814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1001 20:10:23.948631   60814 crio.go:462] duration metric: took 1.674464429s to copy over tarball
	I1001 20:10:23.948724   60814 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 20:10:26.651732   60814 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.702978798s)
	I1001 20:10:26.651761   60814 crio.go:469] duration metric: took 2.703091972s to extract the tarball
	I1001 20:10:26.651770   60814 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1001 20:10:26.695940   60814 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:10:26.742151   60814 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1001 20:10:26.742177   60814 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1001 20:10:26.742244   60814 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 20:10:26.742284   60814 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1001 20:10:26.742309   60814 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1001 20:10:26.742437   60814 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1001 20:10:26.742373   60814 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1001 20:10:26.742381   60814 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1001 20:10:26.742403   60814 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1001 20:10:26.742408   60814 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1001 20:10:26.743554   60814 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1001 20:10:26.743786   60814 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1001 20:10:26.743818   60814 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1001 20:10:26.743822   60814 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1001 20:10:26.743796   60814 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1001 20:10:26.743889   60814 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1001 20:10:26.743867   60814 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1001 20:10:26.743818   60814 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 20:10:26.977877   60814 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1001 20:10:27.001349   60814 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1001 20:10:27.022808   60814 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1001 20:10:27.022863   60814 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1001 20:10:27.022913   60814 ssh_runner.go:195] Run: which crictl
	I1001 20:10:27.055241   60814 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1001 20:10:27.058199   60814 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1001 20:10:27.058231   60814 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1001 20:10:27.058242   60814 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1001 20:10:27.058277   60814 ssh_runner.go:195] Run: which crictl
	I1001 20:10:27.077975   60814 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1001 20:10:27.100068   60814 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1001 20:10:27.100158   60814 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1001 20:10:27.106753   60814 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1001 20:10:27.121642   60814 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1001 20:10:27.121693   60814 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1001 20:10:27.121700   60814 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1001 20:10:27.121722   60814 ssh_runner.go:195] Run: which crictl
	I1001 20:10:27.121779   60814 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1001 20:10:27.189652   60814 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1001 20:10:27.189713   60814 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1001 20:10:27.189776   60814 ssh_runner.go:195] Run: which crictl
	I1001 20:10:27.259665   60814 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1001 20:10:27.259689   60814 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1001 20:10:27.259719   60814 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1001 20:10:27.259719   60814 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1001 20:10:27.259766   60814 ssh_runner.go:195] Run: which crictl
	I1001 20:10:27.259771   60814 ssh_runner.go:195] Run: which crictl
	I1001 20:10:27.263532   60814 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1001 20:10:27.263546   60814 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1001 20:10:27.263556   60814 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1001 20:10:27.263584   60814 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1001 20:10:27.263610   60814 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1001 20:10:27.263620   60814 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1001 20:10:27.263618   60814 ssh_runner.go:195] Run: which crictl
	I1001 20:10:27.265526   60814 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1001 20:10:27.269340   60814 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1001 20:10:27.392804   60814 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1001 20:10:27.392819   60814 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1001 20:10:27.392893   60814 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1001 20:10:27.392903   60814 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1001 20:10:27.392969   60814 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1001 20:10:27.393471   60814 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1001 20:10:27.501584   60814 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1001 20:10:27.501692   60814 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1001 20:10:27.501707   60814 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1001 20:10:27.501732   60814 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1001 20:10:27.501774   60814 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1001 20:10:27.501837   60814 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1001 20:10:27.606050   60814 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1001 20:10:27.606125   60814 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1001 20:10:27.606165   60814 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1001 20:10:27.606201   60814 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1001 20:10:27.606230   60814 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1001 20:10:27.651404   60814 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1001 20:10:27.658803   60814 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1001 20:10:28.043948   60814 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 20:10:28.186557   60814 cache_images.go:92] duration metric: took 1.444359698s to LoadCachedImages
	W1001 20:10:28.186691   60814 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I1001 20:10:28.186711   60814 kubeadm.go:934] updating node { 192.168.50.159 8443 v1.20.0 crio true true} ...
	I1001 20:10:28.186829   60814 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-869396 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-869396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 20:10:28.186920   60814 ssh_runner.go:195] Run: crio config
	I1001 20:10:28.236805   60814 cni.go:84] Creating CNI manager for ""
	I1001 20:10:28.236834   60814 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:10:28.236843   60814 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 20:10:28.236861   60814 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.159 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-869396 NodeName:kubernetes-upgrade-869396 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1001 20:10:28.237006   60814 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-869396"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 20:10:28.237068   60814 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1001 20:10:28.248188   60814 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 20:10:28.248318   60814 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 20:10:28.261642   60814 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I1001 20:10:28.280448   60814 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 20:10:28.299788   60814 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I1001 20:10:28.321121   60814 ssh_runner.go:195] Run: grep 192.168.50.159	control-plane.minikube.internal$ /etc/hosts
	I1001 20:10:28.325209   60814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
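	(The bash one-liner above drops any existing control-plane.minikube.internal entry from /etc/hosts and appends the new mapping. A rough Go equivalent follows, assuming root access and the same IP/hostname pair; this is a sketch for clarity, not minikube's implementation.)

// hosts_entry.go: replace the control-plane.minikube.internal entry in /etc/hosts,
// roughly what the logged one-liner does (filter out the old line, append the new one).
// Illustrative sketch; the path and the 0644 mode are assumptions.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const (
		hostsPath = "/etc/hosts"
		hostname  = "control-plane.minikube.internal"
		ip        = "192.168.50.159"
	)

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		log.Fatal(err)
	}

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any line already ending in the control-plane hostname (grep -v equivalent).
		if strings.HasSuffix(line, "\t"+hostname) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)

	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}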
	I1001 20:10:28.338570   60814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:10:28.482641   60814 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:10:28.503517   60814 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396 for IP: 192.168.50.159
	I1001 20:10:28.503541   60814 certs.go:194] generating shared ca certs ...
	I1001 20:10:28.503562   60814 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:10:28.503775   60814 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 20:10:28.503830   60814 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 20:10:28.503844   60814 certs.go:256] generating profile certs ...
	I1001 20:10:28.503920   60814 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/client.key
	I1001 20:10:28.503946   60814 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/client.crt with IP's: []
	I1001 20:10:28.663219   60814 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/client.crt ...
	I1001 20:10:28.663260   60814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/client.crt: {Name:mke9f4febbf04af02106037804a70ce3f22e6734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:10:28.663472   60814 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/client.key ...
	I1001 20:10:28.663491   60814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/client.key: {Name:mk466005de59d0c679d41bfdea896d1bc1f441c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:10:28.663581   60814 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/apiserver.key.a5b565d1
	I1001 20:10:28.663602   60814 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/apiserver.crt.a5b565d1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.159]
	I1001 20:10:28.764842   60814 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/apiserver.crt.a5b565d1 ...
	I1001 20:10:28.764883   60814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/apiserver.crt.a5b565d1: {Name:mkdf5006209fcdc0d83753fa6bd45da9f7c5a245 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:10:28.765122   60814 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/apiserver.key.a5b565d1 ...
	I1001 20:10:28.765147   60814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/apiserver.key.a5b565d1: {Name:mk88f67d354fe900b7a095a47205dfc757408eaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:10:28.765261   60814 certs.go:381] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/apiserver.crt.a5b565d1 -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/apiserver.crt
	I1001 20:10:28.765368   60814 certs.go:385] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/apiserver.key.a5b565d1 -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/apiserver.key
	I1001 20:10:28.765446   60814 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/proxy-client.key
	I1001 20:10:28.765468   60814 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/proxy-client.crt with IP's: []
	I1001 20:10:28.861119   60814 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/proxy-client.crt ...
	I1001 20:10:28.861159   60814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/proxy-client.crt: {Name:mka476e6b2c62dea2c33222b82c3f2fb33a00c8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:10:28.861377   60814 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/proxy-client.key ...
	I1001 20:10:28.861398   60814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/proxy-client.key: {Name:mkd3fa4c1f88187cacd923061e20ae2a8cc61848 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:10:28.861582   60814 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem (1338 bytes)
	W1001 20:10:28.861622   60814 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430_empty.pem, impossibly tiny 0 bytes
	I1001 20:10:28.861630   60814 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 20:10:28.861653   60814 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 20:10:28.861673   60814 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 20:10:28.861693   60814 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 20:10:28.861734   60814 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem (1708 bytes)
	I1001 20:10:28.862290   60814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 20:10:28.892326   60814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 20:10:28.921776   60814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 20:10:28.949379   60814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 20:10:28.986164   60814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1001 20:10:29.018616   60814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 20:10:29.049825   60814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 20:10:29.079972   60814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1001 20:10:29.109675   60814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem --> /usr/share/ca-certificates/18430.pem (1338 bytes)
	I1001 20:10:29.140777   60814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /usr/share/ca-certificates/184302.pem (1708 bytes)
	I1001 20:10:29.172678   60814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 20:10:29.199595   60814 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 20:10:29.218262   60814 ssh_runner.go:195] Run: openssl version
	I1001 20:10:29.224568   60814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/184302.pem && ln -fs /usr/share/ca-certificates/184302.pem /etc/ssl/certs/184302.pem"
	I1001 20:10:29.239510   60814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/184302.pem
	I1001 20:10:29.246361   60814 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 20:10:29.246486   60814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/184302.pem
	I1001 20:10:29.254415   60814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/184302.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 20:10:29.275121   60814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 20:10:29.291686   60814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:10:29.296748   60814 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:10:29.296841   60814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:10:29.304053   60814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 20:10:29.322059   60814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18430.pem && ln -fs /usr/share/ca-certificates/18430.pem /etc/ssl/certs/18430.pem"
	I1001 20:10:29.344637   60814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18430.pem
	I1001 20:10:29.351362   60814 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 20:10:29.351436   60814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18430.pem
	I1001 20:10:29.365635   60814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18430.pem /etc/ssl/certs/51391683.0"
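	(The symlink names used above, such as 51391683.0, 3ec20f2e.0, and b5213941.0, are the OpenSSL subject hashes printed by "openssl x509 -hash -noout"; the hash-named link with a ".0" suffix is how the certificate-directory lookup finds a CA. A small Go sketch that mirrors the logged openssl/ln pair, assuming openssl is on PATH and /etc/ssl/certs is writable; illustrative only.)

// trust_cert.go: install a certificate into /etc/ssl/certs under its OpenSSL
// subject-hash name (<hash>.0), mirroring the openssl/ln commands logged above.
// Sketch only; the example cert path is taken from the log.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem"

	// Equivalent of: openssl x509 -hash -noout -in <cert>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))

	// Equivalent of: ln -fs <cert> /etc/ssl/certs/<hash>.0
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // -f: replace an existing link if present
	if err := os.Symlink(certPath, link); err != nil {
		log.Fatal(err)
	}
}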
	I1001 20:10:29.381115   60814 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 20:10:29.389681   60814 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 20:10:29.389751   60814 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-869396 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-869396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:10:29.389846   60814 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 20:10:29.389901   60814 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 20:10:29.462960   60814 cri.go:89] found id: ""
	I1001 20:10:29.463091   60814 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 20:10:29.475838   60814 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 20:10:29.486815   60814 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:10:29.499790   60814 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:10:29.499811   60814 kubeadm.go:157] found existing configuration files:
	
	I1001 20:10:29.499866   60814 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 20:10:29.511423   60814 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:10:29.511500   60814 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:10:29.524555   60814 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 20:10:29.536955   60814 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:10:29.537034   60814 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:10:29.550242   60814 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 20:10:29.562877   60814 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:10:29.562947   60814 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:10:29.574265   60814 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 20:10:29.584492   60814 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:10:29.584577   60814 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 20:10:29.596384   60814 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 20:10:29.730895   60814 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1001 20:10:29.730961   60814 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:10:29.888802   60814 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:10:29.888943   60814 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:10:29.889060   60814 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1001 20:10:30.119538   60814 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 20:10:30.231596   60814 out.go:235]   - Generating certificates and keys ...
	I1001 20:10:30.231735   60814 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:10:30.231829   60814 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:10:30.459449   60814 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1001 20:10:30.570657   60814 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1001 20:10:31.110295   60814 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1001 20:10:31.271416   60814 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1001 20:10:31.431762   60814 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1001 20:10:31.431970   60814 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-869396 localhost] and IPs [192.168.50.159 127.0.0.1 ::1]
	I1001 20:10:31.537020   60814 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1001 20:10:31.537238   60814 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-869396 localhost] and IPs [192.168.50.159 127.0.0.1 ::1]
	I1001 20:10:31.666549   60814 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1001 20:10:32.130613   60814 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1001 20:10:32.303782   60814 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1001 20:10:32.304033   60814 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:10:32.489746   60814 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:10:32.665962   60814 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:10:33.048372   60814 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:10:33.150735   60814 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:10:33.211173   60814 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:10:33.212218   60814 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:10:33.212284   60814 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:10:33.342676   60814 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:10:33.388107   60814 out.go:235]   - Booting up control plane ...
	I1001 20:10:33.388246   60814 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:10:33.388394   60814 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:10:33.388519   60814 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:10:33.388678   60814 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:10:33.388900   60814 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1001 20:11:13.377683   60814 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1001 20:11:13.377809   60814 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:11:13.378095   60814 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:11:18.378575   60814 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:11:18.378816   60814 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:11:28.378167   60814 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:11:28.378385   60814 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:11:48.378006   60814 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:11:48.378311   60814 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:12:28.379264   60814 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:12:28.379511   60814 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:12:28.379530   60814 kubeadm.go:310] 
	I1001 20:12:28.379586   60814 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1001 20:12:28.379649   60814 kubeadm.go:310] 		timed out waiting for the condition
	I1001 20:12:28.379661   60814 kubeadm.go:310] 
	I1001 20:12:28.379703   60814 kubeadm.go:310] 	This error is likely caused by:
	I1001 20:12:28.379755   60814 kubeadm.go:310] 		- The kubelet is not running
	I1001 20:12:28.379920   60814 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1001 20:12:28.379939   60814 kubeadm.go:310] 
	I1001 20:12:28.380078   60814 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1001 20:12:28.380117   60814 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1001 20:12:28.380162   60814 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1001 20:12:28.380175   60814 kubeadm.go:310] 
	I1001 20:12:28.380344   60814 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1001 20:12:28.380497   60814 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1001 20:12:28.380521   60814 kubeadm.go:310] 
	I1001 20:12:28.380670   60814 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1001 20:12:28.380797   60814 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1001 20:12:28.380899   60814 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1001 20:12:28.381008   60814 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1001 20:12:28.381028   60814 kubeadm.go:310] 
	I1001 20:12:28.382334   60814 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 20:12:28.382423   60814 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1001 20:12:28.382504   60814 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
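The repeated [kubelet-check] messages above are kubeadm polling the kubelet's local healthz endpoint until it answers or the 4m0s wait expires; "connection refused" on 127.0.0.1:10248 means the kubelet never came up at all. The probe and the triage steps kubeadm suggests can be reproduced by hand on the node with the commands already quoted in the output (sudo assumed):

	# Health endpoint kubeadm is polling; a healthy kubelet answers "ok".
	curl -sSL http://localhost:10248/healthz
	# Service state and kubelet journal, as suggested above.
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# Any control-plane containers CRI-O managed to start.
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause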
	W1001 20:12:28.382622   60814 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-869396 localhost] and IPs [192.168.50.159 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-869396 localhost] and IPs [192.168.50.159 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1001 20:12:28.382669   60814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
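Before retrying, minikube tears the failed attempt down with kubeadm reset against the CRI-O socket and then re-checks the node state (the systemctl and ls runs that follow). A rough by-hand equivalent, assuming root on the node and omitting minikube's PATH override of its bundled binaries:

	# Tear down the failed control plane, then confirm nothing was left behind.
	sudo kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	sudo systemctl is-active --quiet kubelet || echo "kubelet is not active"
	sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
	            /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf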
	I1001 20:12:28.837035   60814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:12:28.853092   60814 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:12:28.866286   60814 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:12:28.866310   60814 kubeadm.go:157] found existing configuration files:
	
	I1001 20:12:28.866373   60814 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 20:12:28.876750   60814 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:12:28.876846   60814 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:12:28.890684   60814 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 20:12:28.901752   60814 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:12:28.901823   60814 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:12:28.912608   60814 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 20:12:28.923312   60814 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:12:28.923383   60814 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:12:28.935740   60814 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 20:12:28.947613   60814 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:12:28.947691   60814 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 20:12:28.959965   60814 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 20:12:29.049632   60814 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1001 20:12:29.049812   60814 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:12:29.229434   60814 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:12:29.229577   60814 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:12:29.229687   60814 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1001 20:12:29.453121   60814 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 20:12:29.455209   60814 out.go:235]   - Generating certificates and keys ...
	I1001 20:12:29.455303   60814 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:12:29.455401   60814 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:12:29.455526   60814 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 20:12:29.455618   60814 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 20:12:29.455705   60814 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 20:12:29.455781   60814 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 20:12:29.455880   60814 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 20:12:29.455962   60814 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 20:12:29.456067   60814 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 20:12:29.456215   60814 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 20:12:29.456296   60814 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 20:12:29.456395   60814 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:12:29.511835   60814 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:12:29.626514   60814 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:12:29.759518   60814 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:12:30.106730   60814 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:12:30.131715   60814 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:12:30.133441   60814 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:12:30.133508   60814 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:12:30.289981   60814 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:12:30.291420   60814 out.go:235]   - Booting up control plane ...
	I1001 20:12:30.291537   60814 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:12:30.303866   60814 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:12:30.307038   60814 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:12:30.307152   60814 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:12:30.308738   60814 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1001 20:13:10.311794   60814 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1001 20:13:10.311978   60814 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:13:10.312239   60814 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:13:15.312698   60814 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:13:15.313037   60814 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:13:25.313547   60814 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:13:25.313802   60814 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:13:45.313093   60814 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:13:45.313318   60814 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:14:25.313387   60814 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:14:25.313689   60814 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:14:25.313703   60814 kubeadm.go:310] 
	I1001 20:14:25.313750   60814 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1001 20:14:25.313793   60814 kubeadm.go:310] 		timed out waiting for the condition
	I1001 20:14:25.313801   60814 kubeadm.go:310] 
	I1001 20:14:25.313849   60814 kubeadm.go:310] 	This error is likely caused by:
	I1001 20:14:25.313885   60814 kubeadm.go:310] 		- The kubelet is not running
	I1001 20:14:25.314040   60814 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1001 20:14:25.314071   60814 kubeadm.go:310] 
	I1001 20:14:25.314262   60814 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1001 20:14:25.314312   60814 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1001 20:14:25.314360   60814 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1001 20:14:25.314371   60814 kubeadm.go:310] 
	I1001 20:14:25.314510   60814 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1001 20:14:25.314631   60814 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1001 20:14:25.314643   60814 kubeadm.go:310] 
	I1001 20:14:25.314807   60814 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1001 20:14:25.314925   60814 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1001 20:14:25.315031   60814 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1001 20:14:25.315122   60814 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1001 20:14:25.315134   60814 kubeadm.go:310] 
	I1001 20:14:25.315501   60814 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 20:14:25.315629   60814 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1001 20:14:25.315729   60814 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1001 20:14:25.315819   60814 kubeadm.go:394] duration metric: took 3m55.926074837s to StartCluster
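With the retry also timing out, minikube moves on to diagnostics and asks CRI-O for any container whose name matches a control-plane component; every query below comes back empty, confirming that no static pods were ever created. The per-component queries reduce to a small loop over the same crictl invocation shown in the log (sudo assumed):

	# List containers in any state whose name matches each control-plane component.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    echo "== $name =="
	    sudo crictl ps -a --quiet --name="$name"
	done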
	I1001 20:14:25.315889   60814 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:14:25.315974   60814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:14:25.356949   60814 cri.go:89] found id: ""
	I1001 20:14:25.356978   60814 logs.go:276] 0 containers: []
	W1001 20:14:25.356986   60814 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:14:25.356993   60814 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:14:25.357040   60814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:14:25.390411   60814 cri.go:89] found id: ""
	I1001 20:14:25.390442   60814 logs.go:276] 0 containers: []
	W1001 20:14:25.390453   60814 logs.go:278] No container was found matching "etcd"
	I1001 20:14:25.390459   60814 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:14:25.390512   60814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:14:25.424232   60814 cri.go:89] found id: ""
	I1001 20:14:25.424262   60814 logs.go:276] 0 containers: []
	W1001 20:14:25.424270   60814 logs.go:278] No container was found matching "coredns"
	I1001 20:14:25.424276   60814 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:14:25.424324   60814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:14:25.471142   60814 cri.go:89] found id: ""
	I1001 20:14:25.471178   60814 logs.go:276] 0 containers: []
	W1001 20:14:25.471193   60814 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:14:25.471200   60814 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:14:25.471263   60814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:14:25.510571   60814 cri.go:89] found id: ""
	I1001 20:14:25.510597   60814 logs.go:276] 0 containers: []
	W1001 20:14:25.510605   60814 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:14:25.510611   60814 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:14:25.510666   60814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:14:25.550017   60814 cri.go:89] found id: ""
	I1001 20:14:25.550043   60814 logs.go:276] 0 containers: []
	W1001 20:14:25.550051   60814 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:14:25.550057   60814 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:14:25.550110   60814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:14:25.583356   60814 cri.go:89] found id: ""
	I1001 20:14:25.583393   60814 logs.go:276] 0 containers: []
	W1001 20:14:25.583405   60814 logs.go:278] No container was found matching "kindnet"
	I1001 20:14:25.583425   60814 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:14:25.583440   60814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:14:25.687341   60814 logs.go:123] Gathering logs for container status ...
	I1001 20:14:25.687384   60814 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:14:25.724258   60814 logs.go:123] Gathering logs for kubelet ...
	I1001 20:14:25.724288   60814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:14:25.771742   60814 logs.go:123] Gathering logs for dmesg ...
	I1001 20:14:25.771776   60814 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:14:25.784495   60814 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:14:25.784525   60814 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:14:25.895304   60814 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
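The "Gathering logs" block collects the usual evidence for a dead control plane: the CRI-O and kubelet journals, recent kernel warnings, a container status listing, and kubectl describe nodes, which fails here with "connection refused" because the API server on :8443 never started. Collected by hand, the same evidence looks roughly like this (commands taken from the log, with the container-status fallback simplified to plain crictl):

	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo crictl ps -a
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig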
	W1001 20:14:25.895331   60814 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1001 20:14:25.895399   60814 out.go:270] * 
	* 
	W1001 20:14:25.895522   60814 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1001 20:14:25.895539   60814 out.go:270] * 
	* 
	W1001 20:14:25.896395   60814 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
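The boxed advice is minikube's standard prompt for an unrecoverable start failure: capture the full log bundle and attach it to a GitHub issue. The command it refers to writes everything shown here (and more) to a single file; the -p flag to target this specific profile is an addition, not part of the boxed text:

	# Full log bundle for the failing profile, written to logs.txt.
	minikube logs -p kubernetes-upgrade-869396 --file=logs.txt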
	I1001 20:14:25.899947   60814 out.go:201] 
	W1001 20:14:25.900963   60814 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1001 20:14:25.901012   60814 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1001 20:14:25.901050   60814 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1001 20:14:25.902399   60814 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-869396 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
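The exit status 109 above is the K8S_KUBELET_NOT_RUNNING path: kubeadm's wait-control-plane phase gives up because the kubelet never answers on http://localhost:10248/healthz. A minimal diagnostic sketch, following only the hints quoted in the captured output (the profile name comes from the log; the first three commands are assumed to be run inside the affected VM, for example via 'minikube ssh -p kubernetes-upgrade-869396', while the retry runs on the host):

	# Follow the troubleshooting hints printed by kubeadm above: inspect the kubelet unit.
	systemctl status kubelet
	journalctl -xeu kubelet

	# List any control-plane containers CRI-O managed to start (hint quoted from the log).
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# The log's own suggestion for a cgroup-driver mismatch: retry the start with an
	# explicit kubelet cgroup driver (flag quoted verbatim from the minikube output).
	minikube start -p kubernetes-upgrade-869396 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd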
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-869396
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-869396: (6.301906686s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-869396 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-869396 status --format={{.Host}}: exit status 7 (64.382006ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-869396 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-869396 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.426503153s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-869396 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-869396 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-869396 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (83.601582ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-869396] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19736
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-869396
	    minikube start -p kubernetes-upgrade-869396 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8693962 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-869396 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
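Exit status 106 here is the expected guard: the test deliberately attempts a downgrade ("should fail" above), and minikube refuses to downgrade the existing v1.31.1 cluster to v1.20.0 in place, leaving it untouched. For reference, a sketch of the recovery paths quoted in the suggestion (commands and profile name are taken verbatim from the output; the two paths are alternatives, not a sequence):

	# Path 1 from the suggestion: recreate the profile at the older version.
	# WARNING: this deletes the existing kubernetes-upgrade-869396 cluster and its data.
	minikube delete -p kubernetes-upgrade-869396
	minikube start -p kubernetes-upgrade-869396 --kubernetes-version=v1.20.0

	# Path 3 from the suggestion: keep the existing cluster at v1.31.1,
	# which is what the test does next.
	minikube start -p kubernetes-upgrade-869396 --kubernetes-version=v1.31.1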
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-869396 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-869396 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (58.377692146s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-10-01 20:16:15.287750467 +0000 UTC m=+4926.236553866
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-869396 -n kubernetes-upgrade-869396
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-869396 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-869396 logs -n 25: (1.582509683s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396 | jenkins | v1.34.0 | 01 Oct 24 20:09 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-042095                              | minikube                  | jenkins | v1.26.0 | 01 Oct 24 20:09 UTC | 01 Oct 24 20:11 UTC |
	|         | --memory=2200 --vm-driver=kvm2                         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	| ssh     | cert-options-432128 ssh                                | cert-options-432128       | jenkins | v1.34.0 | 01 Oct 24 20:10 UTC | 01 Oct 24 20:10 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-432128 -- sudo                         | cert-options-432128       | jenkins | v1.34.0 | 01 Oct 24 20:10 UTC | 01 Oct 24 20:10 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-432128                                 | cert-options-432128       | jenkins | v1.34.0 | 01 Oct 24 20:10 UTC | 01 Oct 24 20:10 UTC |
	| start   | -p old-k8s-version-359369                              | old-k8s-version-359369    | jenkins | v1.34.0 | 01 Oct 24 20:10 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-042095 stop                            | minikube                  | jenkins | v1.26.0 | 01 Oct 24 20:11 UTC | 01 Oct 24 20:11 UTC |
	| start   | -p stopped-upgrade-042095                              | stopped-upgrade-042095    | jenkins | v1.34.0 | 01 Oct 24 20:11 UTC | 01 Oct 24 20:11 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-042095                              | stopped-upgrade-042095    | jenkins | v1.34.0 | 01 Oct 24 20:11 UTC | 01 Oct 24 20:11 UTC |
	| start   | -p no-preload-262337                                   | no-preload-262337         | jenkins | v1.34.0 | 01 Oct 24 20:11 UTC | 01 Oct 24 20:13 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| start   | -p cert-expiration-402897                              | cert-expiration-402897    | jenkins | v1.34.0 | 01 Oct 24 20:12 UTC | 01 Oct 24 20:12 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-402897                              | cert-expiration-402897    | jenkins | v1.34.0 | 01 Oct 24 20:12 UTC | 01 Oct 24 20:12 UTC |
	| start   | -p embed-certs-106982                                  | embed-certs-106982        | jenkins | v1.34.0 | 01 Oct 24 20:12 UTC | 01 Oct 24 20:13 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-262337             | no-preload-262337         | jenkins | v1.34.0 | 01 Oct 24 20:13 UTC | 01 Oct 24 20:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-262337                                   | no-preload-262337         | jenkins | v1.34.0 | 01 Oct 24 20:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-106982            | embed-certs-106982        | jenkins | v1.34.0 | 01 Oct 24 20:13 UTC | 01 Oct 24 20:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p embed-certs-106982                                  | embed-certs-106982        | jenkins | v1.34.0 | 01 Oct 24 20:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396 | jenkins | v1.34.0 | 01 Oct 24 20:14 UTC | 01 Oct 24 20:14 UTC |
	| start   | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396 | jenkins | v1.34.0 | 01 Oct 24 20:14 UTC | 01 Oct 24 20:15 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396 | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396 | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC | 01 Oct 24 20:16 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-359369        | old-k8s-version-359369    | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-262337                  | no-preload-262337         | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-262337                                   | no-preload-262337         | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-106982                 | embed-certs-106982        | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 20:15:51
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 20:15:51.420583   64676 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:15:51.420707   64676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:15:51.420717   64676 out.go:358] Setting ErrFile to fd 2...
	I1001 20:15:51.420722   64676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:15:51.420891   64676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 20:15:51.421455   64676 out.go:352] Setting JSON to false
	I1001 20:15:51.422386   64676 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7093,"bootTime":1727806658,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 20:15:51.422479   64676 start.go:139] virtualization: kvm guest
	I1001 20:15:51.424589   64676 out.go:177] * [no-preload-262337] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 20:15:51.425724   64676 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 20:15:51.425731   64676 notify.go:220] Checking for updates...
	I1001 20:15:51.427934   64676 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 20:15:51.428909   64676 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:15:51.429935   64676 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 20:15:51.430961   64676 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 20:15:51.432167   64676 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 20:15:51.433716   64676 config.go:182] Loaded profile config "no-preload-262337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:15:51.434130   64676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:15:51.434179   64676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:15:51.449323   64676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44363
	I1001 20:15:51.449882   64676 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:15:51.450488   64676 main.go:141] libmachine: Using API Version  1
	I1001 20:15:51.450513   64676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:15:51.450902   64676 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:15:51.451119   64676 main.go:141] libmachine: (no-preload-262337) Calling .DriverName
	I1001 20:15:51.451369   64676 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 20:15:51.451710   64676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:15:51.451749   64676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:15:51.466305   64676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34599
	I1001 20:15:51.466838   64676 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:15:51.467511   64676 main.go:141] libmachine: Using API Version  1
	I1001 20:15:51.467551   64676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:15:51.467937   64676 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:15:51.468162   64676 main.go:141] libmachine: (no-preload-262337) Calling .DriverName
	I1001 20:15:51.501681   64676 out.go:177] * Using the kvm2 driver based on existing profile
	I1001 20:15:51.502665   64676 start.go:297] selected driver: kvm2
	I1001 20:15:51.502678   64676 start.go:901] validating driver "kvm2" against &{Name:no-preload-262337 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-262337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.93 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:15:51.502801   64676 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 20:15:51.503434   64676 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:15:51.503524   64676 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-11198/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 20:15:51.518504   64676 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 20:15:51.518904   64676 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:15:51.518943   64676 cni.go:84] Creating CNI manager for ""
	I1001 20:15:51.518993   64676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:15:51.519041   64676 start.go:340] cluster config:
	{Name:no-preload-262337 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-262337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.93 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:15:51.519165   64676 iso.go:125] acquiring lock: {Name:mk06aa0d5182c9fbfa5e40313025cbec2b4400a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:15:51.520987   64676 out.go:177] * Starting "no-preload-262337" primary control-plane node in "no-preload-262337" cluster
	I1001 20:15:51.522206   64676 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 20:15:51.522323   64676 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/no-preload-262337/config.json ...
	I1001 20:15:51.522419   64676 cache.go:107] acquiring lock: {Name:mk9133b9689f4b1221c6eab381d72914252ee301 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:15:51.522454   64676 cache.go:107] acquiring lock: {Name:mka40978928887ac0ec18aa0ded29b1b95e664a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:15:51.522454   64676 cache.go:107] acquiring lock: {Name:mkf46e773230b7448ebbc9c209f19036d080ae36 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:15:51.522419   64676 cache.go:107] acquiring lock: {Name:mkdea4c359a277d6c391ec03b4d9307a079dcc5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:15:51.522523   64676 cache.go:107] acquiring lock: {Name:mk4d5d445c40c18c5f2edff17bf92af9ad54a9e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:15:51.522569   64676 cache.go:115] /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1001 20:15:51.522585   64676 cache.go:115] /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I1001 20:15:51.522579   64676 start.go:360] acquireMachinesLock for no-preload-262337: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 20:15:51.522593   64676 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 181.335µs
	I1001 20:15:51.522603   64676 cache.go:115] /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 exists
	I1001 20:15:51.522602   64676 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1" took 81.221µs
	I1001 20:15:51.522615   64676 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1001 20:15:51.522615   64676 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0" took 168.714µs
	I1001 20:15:51.522616   64676 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I1001 20:15:51.522624   64676 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1001 20:15:51.522525   64676 cache.go:107] acquiring lock: {Name:mke3dda9ed2656e6919d8575ea7d57d2611a4a94 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:15:51.522627   64676 cache.go:107] acquiring lock: {Name:mk81686cffac0dca60b4e200d875afe79b5a9d71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:15:51.522595   64676 cache.go:107] acquiring lock: {Name:mk2a45d769cdb2552e0a8bfc252676539df4755b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:15:51.522701   64676 cache.go:115] /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I1001 20:15:51.522716   64676 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1" took 263.406µs
	I1001 20:15:51.522725   64676 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I1001 20:15:51.522743   64676 cache.go:115] /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I1001 20:15:51.522744   64676 cache.go:115] /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I1001 20:15:51.522750   64676 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 229.803µs
	I1001 20:15:51.522746   64676 start.go:364] duration metric: took 136.736µs to acquireMachinesLock for "no-preload-262337"
	I1001 20:15:51.522763   64676 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1" took 346.888µs
	I1001 20:15:51.522755   64676 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I1001 20:15:51.522775   64676 start.go:96] Skipping create...Using existing machine configuration
	I1001 20:15:51.522744   64676 cache.go:115] /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I1001 20:15:51.522784   64676 fix.go:54] fixHost starting: 
	I1001 20:15:51.522788   64676 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1" took 266.5µs
	I1001 20:15:51.522797   64676 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I1001 20:15:51.522795   64676 cache.go:115] /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1001 20:15:51.522775   64676 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I1001 20:15:51.522816   64676 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 288.308µs
	I1001 20:15:51.522841   64676 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1001 20:15:51.522848   64676 cache.go:87] Successfully saved all images to host disk.
	I1001 20:15:51.523087   64676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:15:51.523118   64676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:15:51.537959   64676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37647
	I1001 20:15:51.538388   64676 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:15:51.538903   64676 main.go:141] libmachine: Using API Version  1
	I1001 20:15:51.538922   64676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:15:51.539184   64676 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:15:51.539339   64676 main.go:141] libmachine: (no-preload-262337) Calling .DriverName
	I1001 20:15:51.539475   64676 main.go:141] libmachine: (no-preload-262337) Calling .GetState
	I1001 20:15:51.540904   64676 fix.go:112] recreateIfNeeded on no-preload-262337: state=Running err=<nil>
	W1001 20:15:51.540934   64676 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 20:15:51.542569   64676 out.go:177] * Updating the running kvm2 "no-preload-262337" VM ...
	I1001 20:15:48.324098   64288 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 d31d03f256e2c54d5b692c5193b717e8dc4bad021b1235776e66e0385fce0cfb 4821392c6c84de53d6e19912721fbcbbd528e7d3e6bc3fbc44671127d76b48d2 55e672ef7211761f50e1a6a7c3d99c934286c3c4b9266e3e18297cfa20f96263 3ffa66ab94f0e68e781898ef568cd97353c408f8f6df4608f625eded83c3dc57 7b4f5c956a447ed2ca8be8f604f4f98d3ef05e190b6b82d58a377fb76c2de98b c88306b7aaad8c10b81810a08c4cb346304ae878e859d3bbaaa40b7bcfcc47f1 5aff26be3c677550a192cb07ee7ac95abee3240817efccb0154a5d6ed85b06dc b00af3fcecb8aa1dc1a66947db4b8e451615b86befdedb2df8ae7e9ea12e36fc 6408089959d5200b4855d4c4a9b6ee695f3b82ae4747e92da8584c09087a9be3 e136e432b7fbdb53c650a67e38112eb835a3f5233805ea7e5d2635a968d18fc5: (15.107411012s)
	W1001 20:15:48.324176   64288 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 d31d03f256e2c54d5b692c5193b717e8dc4bad021b1235776e66e0385fce0cfb 4821392c6c84de53d6e19912721fbcbbd528e7d3e6bc3fbc44671127d76b48d2 55e672ef7211761f50e1a6a7c3d99c934286c3c4b9266e3e18297cfa20f96263 3ffa66ab94f0e68e781898ef568cd97353c408f8f6df4608f625eded83c3dc57 7b4f5c956a447ed2ca8be8f604f4f98d3ef05e190b6b82d58a377fb76c2de98b c88306b7aaad8c10b81810a08c4cb346304ae878e859d3bbaaa40b7bcfcc47f1 5aff26be3c677550a192cb07ee7ac95abee3240817efccb0154a5d6ed85b06dc b00af3fcecb8aa1dc1a66947db4b8e451615b86befdedb2df8ae7e9ea12e36fc 6408089959d5200b4855d4c4a9b6ee695f3b82ae4747e92da8584c09087a9be3 e136e432b7fbdb53c650a67e38112eb835a3f5233805ea7e5d2635a968d18fc5: Process exited with status 1
	stdout:
	d31d03f256e2c54d5b692c5193b717e8dc4bad021b1235776e66e0385fce0cfb
	4821392c6c84de53d6e19912721fbcbbd528e7d3e6bc3fbc44671127d76b48d2
	55e672ef7211761f50e1a6a7c3d99c934286c3c4b9266e3e18297cfa20f96263
	3ffa66ab94f0e68e781898ef568cd97353c408f8f6df4608f625eded83c3dc57
	7b4f5c956a447ed2ca8be8f604f4f98d3ef05e190b6b82d58a377fb76c2de98b
	c88306b7aaad8c10b81810a08c4cb346304ae878e859d3bbaaa40b7bcfcc47f1
	
	stderr:
	E1001 20:15:48.307188    3558 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5aff26be3c677550a192cb07ee7ac95abee3240817efccb0154a5d6ed85b06dc\": container with ID starting with 5aff26be3c677550a192cb07ee7ac95abee3240817efccb0154a5d6ed85b06dc not found: ID does not exist" containerID="5aff26be3c677550a192cb07ee7ac95abee3240817efccb0154a5d6ed85b06dc"
	time="2024-10-01T20:15:48Z" level=fatal msg="stopping the container \"5aff26be3c677550a192cb07ee7ac95abee3240817efccb0154a5d6ed85b06dc\": rpc error: code = NotFound desc = could not find container \"5aff26be3c677550a192cb07ee7ac95abee3240817efccb0154a5d6ed85b06dc\": container with ID starting with 5aff26be3c677550a192cb07ee7ac95abee3240817efccb0154a5d6ed85b06dc not found: ID does not exist"
	I1001 20:15:48.324250   64288 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1001 20:15:48.371913   64288 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:15:48.383340   64288 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Oct  1 20:14 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5646 Oct  1 20:14 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5755 Oct  1 20:14 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Oct  1 20:14 /etc/kubernetes/scheduler.conf
	
	I1001 20:15:48.383396   64288 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 20:15:48.393570   64288 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 20:15:48.403590   64288 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 20:15:48.412538   64288 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1001 20:15:48.412597   64288 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:15:48.423174   64288 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 20:15:48.432274   64288 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1001 20:15:48.432345   64288 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 20:15:48.441577   64288 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 20:15:48.451013   64288 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:15:48.513875   64288 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:15:49.480940   64288 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:15:49.699294   64288 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:15:49.762260   64288 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:15:49.835351   64288 api_server.go:52] waiting for apiserver process to appear ...
	I1001 20:15:49.835468   64288 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:15:49.863544   64288 api_server.go:72] duration metric: took 28.192808ms to wait for apiserver process to appear ...
	I1001 20:15:49.863582   64288 api_server.go:88] waiting for apiserver healthz status ...
	I1001 20:15:49.863604   64288 api_server.go:253] Checking apiserver healthz at https://192.168.50.159:8443/healthz ...
	I1001 20:15:51.543651   64676 machine.go:93] provisionDockerMachine start ...
	I1001 20:15:51.543669   64676 main.go:141] libmachine: (no-preload-262337) Calling .DriverName
	I1001 20:15:51.543841   64676 main.go:141] libmachine: (no-preload-262337) Calling .GetSSHHostname
	I1001 20:15:51.546512   64676 main.go:141] libmachine: (no-preload-262337) DBG | domain no-preload-262337 has defined MAC address 52:54:00:8e:b1:d4 in network mk-no-preload-262337
	I1001 20:15:51.546998   64676 main.go:141] libmachine: (no-preload-262337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:b1:d4", ip: ""} in network mk-no-preload-262337: {Iface:virbr3 ExpiryTime:2024-10-01 21:12:12 +0000 UTC Type:0 Mac:52:54:00:8e:b1:d4 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-262337 Clientid:01:52:54:00:8e:b1:d4}
	I1001 20:15:51.547021   64676 main.go:141] libmachine: (no-preload-262337) DBG | domain no-preload-262337 has defined IP address 192.168.61.93 and MAC address 52:54:00:8e:b1:d4 in network mk-no-preload-262337
	I1001 20:15:51.547156   64676 main.go:141] libmachine: (no-preload-262337) Calling .GetSSHPort
	I1001 20:15:51.547317   64676 main.go:141] libmachine: (no-preload-262337) Calling .GetSSHKeyPath
	I1001 20:15:51.547444   64676 main.go:141] libmachine: (no-preload-262337) Calling .GetSSHKeyPath
	I1001 20:15:51.547567   64676 main.go:141] libmachine: (no-preload-262337) Calling .GetSSHUsername
	I1001 20:15:51.547701   64676 main.go:141] libmachine: Using SSH client type: native
	I1001 20:15:51.547915   64676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I1001 20:15:51.547929   64676 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 20:15:54.452710   64676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.93:22: connect: no route to host
	I1001 20:15:54.864070   64288 api_server.go:269] stopped: https://192.168.50.159:8443/healthz: Get "https://192.168.50.159:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 20:15:54.864139   64288 api_server.go:253] Checking apiserver healthz at https://192.168.50.159:8443/healthz ...
	I1001 20:15:57.524713   64676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.93:22: connect: no route to host
	I1001 20:15:59.864966   64288 api_server.go:269] stopped: https://192.168.50.159:8443/healthz: Get "https://192.168.50.159:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 20:15:59.865007   64288 api_server.go:253] Checking apiserver healthz at https://192.168.50.159:8443/healthz ...
	I1001 20:16:03.604651   64676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.93:22: connect: no route to host
	I1001 20:16:04.865634   64288 api_server.go:269] stopped: https://192.168.50.159:8443/healthz: Get "https://192.168.50.159:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 20:16:04.865698   64288 api_server.go:253] Checking apiserver healthz at https://192.168.50.159:8443/healthz ...
	I1001 20:16:06.676622   64676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.93:22: connect: no route to host
	I1001 20:16:08.581548   64288 api_server.go:269] stopped: https://192.168.50.159:8443/healthz: Get "https://192.168.50.159:8443/healthz": read tcp 192.168.50.1:47170->192.168.50.159:8443: read: connection reset by peer
	I1001 20:16:08.581602   64288 api_server.go:253] Checking apiserver healthz at https://192.168.50.159:8443/healthz ...
	I1001 20:16:08.582068   64288 api_server.go:269] stopped: https://192.168.50.159:8443/healthz: Get "https://192.168.50.159:8443/healthz": dial tcp 192.168.50.159:8443: connect: connection refused
	I1001 20:16:08.864561   64288 api_server.go:253] Checking apiserver healthz at https://192.168.50.159:8443/healthz ...
	I1001 20:16:08.865276   64288 api_server.go:269] stopped: https://192.168.50.159:8443/healthz: Get "https://192.168.50.159:8443/healthz": dial tcp 192.168.50.159:8443: connect: connection refused
	I1001 20:16:09.363837   64288 api_server.go:253] Checking apiserver healthz at https://192.168.50.159:8443/healthz ...
	I1001 20:16:09.364500   64288 api_server.go:269] stopped: https://192.168.50.159:8443/healthz: Get "https://192.168.50.159:8443/healthz": dial tcp 192.168.50.159:8443: connect: connection refused
	I1001 20:16:09.864074   64288 api_server.go:253] Checking apiserver healthz at https://192.168.50.159:8443/healthz ...
	I1001 20:16:09.864749   64288 api_server.go:269] stopped: https://192.168.50.159:8443/healthz: Get "https://192.168.50.159:8443/healthz": dial tcp 192.168.50.159:8443: connect: connection refused
	I1001 20:16:10.364427   64288 api_server.go:253] Checking apiserver healthz at https://192.168.50.159:8443/healthz ...
	I1001 20:16:10.365128   64288 api_server.go:269] stopped: https://192.168.50.159:8443/healthz: Get "https://192.168.50.159:8443/healthz": dial tcp 192.168.50.159:8443: connect: connection refused
	I1001 20:16:10.863688   64288 api_server.go:253] Checking apiserver healthz at https://192.168.50.159:8443/healthz ...
	I1001 20:16:12.432715   64288 api_server.go:279] https://192.168.50.159:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1001 20:16:12.432746   64288 api_server.go:103] status: https://192.168.50.159:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1001 20:16:12.432761   64288 api_server.go:253] Checking apiserver healthz at https://192.168.50.159:8443/healthz ...
	I1001 20:16:12.520747   64288 api_server.go:279] https://192.168.50.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 20:16:12.520785   64288 api_server.go:103] status: https://192.168.50.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 20:16:12.864263   64288 api_server.go:253] Checking apiserver healthz at https://192.168.50.159:8443/healthz ...
	I1001 20:16:12.868929   64288 api_server.go:279] https://192.168.50.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 20:16:12.868954   64288 api_server.go:103] status: https://192.168.50.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 20:16:13.364647   64288 api_server.go:253] Checking apiserver healthz at https://192.168.50.159:8443/healthz ...
	I1001 20:16:13.384558   64288 api_server.go:279] https://192.168.50.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 20:16:13.384585   64288 api_server.go:103] status: https://192.168.50.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 20:16:13.864093   64288 api_server.go:253] Checking apiserver healthz at https://192.168.50.159:8443/healthz ...
	I1001 20:16:13.868764   64288 api_server.go:279] https://192.168.50.159:8443/healthz returned 200:
	ok
	I1001 20:16:13.874905   64288 api_server.go:141] control plane version: v1.31.1
	I1001 20:16:13.874931   64288 api_server.go:131] duration metric: took 24.011342493s to wait for apiserver health ...
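
The 64288 lines above are minikube polling the apiserver's /healthz endpoint roughly every half second until it returns 200, tolerating the intermediate "connection refused", 403 (before the RBAC bootstrap roles exist) and 500 ("healthz check failed") responses. A simplified, illustrative poll loop of the same shape; the real checker in api_server.go authenticates with the cluster's certificates, whereas this sketch skips TLS verification for brevity:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the deadline expires.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Illustration only: the production code uses the cluster CA and client certs.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // corresponds to "healthz returned 200: ok" above
                }
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence visible in the log
        }
        return fmt.Errorf("apiserver never became healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.50.159:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
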
	I1001 20:16:13.874940   64288 cni.go:84] Creating CNI manager for ""
	I1001 20:16:13.874946   64288 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:16:13.876699   64288 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 20:16:13.878133   64288 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 20:16:13.889246   64288 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
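
The two ssh_runner steps above create /etc/cni/net.d and copy a 496-byte conflist into it; the file's contents are not reproduced in this log. As an illustration only, a typical bridge-plus-portmap conflist of the kind the bridge CNI option produces could be written like this (the subnet and field values are assumptions, not the literal file from this run):

    package main

    import (
        "log"
        "os"
    )

    // Illustrative bridge CNI config; not the actual 496-byte file minikube generated here.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            log.Fatal(err)
        }
    }
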
	I1001 20:16:13.909130   64288 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 20:16:13.909218   64288 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1001 20:16:13.909239   64288 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1001 20:16:13.919372   64288 system_pods.go:59] 8 kube-system pods found
	I1001 20:16:13.919409   64288 system_pods.go:61] "coredns-7c65d6cfc9-52g6f" [aa524290-6d24-4a59-a08c-90634fcd081f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 20:16:13.919417   64288 system_pods.go:61] "coredns-7c65d6cfc9-hxfck" [006f4c00-1bae-4380-aa6c-91c8e31bc91c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 20:16:13.919426   64288 system_pods.go:61] "etcd-kubernetes-upgrade-869396" [9a9aa25a-3a3b-4532-924c-b4d6cad1b10f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1001 20:16:13.919434   64288 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-869396" [33ab1ecc-b999-483a-ac47-77071f3806d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1001 20:16:13.919441   64288 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-869396" [a43ba829-086f-4cd2-9519-4b14e05c5a23] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1001 20:16:13.919446   64288 system_pods.go:61] "kube-proxy-j9q29" [dd09e102-100b-41d5-b33b-f97add3713b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1001 20:16:13.919451   64288 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-869396" [21444614-8d47-4072-9c6e-e567a9993563] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1001 20:16:13.919456   64288 system_pods.go:61] "storage-provisioner" [839a5952-f990-4e0e-988d-1f4028f108ea] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1001 20:16:13.919462   64288 system_pods.go:74] duration metric: took 10.308976ms to wait for pod list to return data ...
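
system_pods.go is listing the kube-system pods and reporting their container readiness while the control plane restarts. A hedged client-go sketch of the same query, with a placeholder kubeconfig path; the real check also inspects each pod's readiness conditions rather than just its phase:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
        }
    }
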
	I1001 20:16:13.919469   64288 node_conditions.go:102] verifying NodePressure condition ...
	I1001 20:16:13.923179   64288 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 20:16:13.923208   64288 node_conditions.go:123] node cpu capacity is 2
	I1001 20:16:13.923222   64288 node_conditions.go:105] duration metric: took 3.748753ms to run NodePressure ...
	I1001 20:16:13.923238   64288 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:16:14.233141   64288 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 20:16:14.244265   64288 ops.go:34] apiserver oom_adj: -16
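
The `cat /proc/$(pgrep kube-apiserver)/oom_adj` step simply confirms that the apiserver process carries a negative OOM adjustment (-16 here), which makes the kernel less likely to kill it under memory pressure. A small illustrative equivalent, not minikube's own code:

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
        if err != nil {
            log.Fatal(err)
        }
        pid := strings.TrimSpace(string(out))
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("kube-apiserver oom_adj: %s", adj) // the run above reports -16
    }
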
	I1001 20:16:14.244287   64288 kubeadm.go:597] duration metric: took 41.101226422s to restartPrimaryControlPlane
	I1001 20:16:14.244295   64288 kubeadm.go:394] duration metric: took 41.277040245s to StartCluster
	I1001 20:16:14.244315   64288 settings.go:142] acquiring lock: {Name:mkeb3fe64ed992373f048aef24eb0d675bcab60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:16:14.244384   64288 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:16:14.245696   64288 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/kubeconfig: {Name:mk7ef19d546928001abf2478a3abe1d17765a591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:16:14.245921   64288 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.159 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 20:16:14.245975   64288 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 20:16:14.246071   64288 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-869396"
	I1001 20:16:14.246089   64288 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-869396"
	W1001 20:16:14.246100   64288 addons.go:243] addon storage-provisioner should already be in state true
	I1001 20:16:14.246106   64288 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-869396"
	I1001 20:16:14.246121   64288 config.go:182] Loaded profile config "kubernetes-upgrade-869396": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:16:14.246134   64288 host.go:66] Checking if "kubernetes-upgrade-869396" exists ...
	I1001 20:16:14.246135   64288 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-869396"
	I1001 20:16:14.246455   64288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:16:14.246484   64288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:16:14.246519   64288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:16:14.246551   64288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:16:14.247434   64288 out.go:177] * Verifying Kubernetes components...
	I1001 20:16:14.248735   64288 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:16:14.262353   64288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33067
	I1001 20:16:14.262360   64288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40373
	I1001 20:16:14.262869   64288 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:16:14.262891   64288 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:16:14.263363   64288 main.go:141] libmachine: Using API Version  1
	I1001 20:16:14.263380   64288 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:16:14.263393   64288 main.go:141] libmachine: Using API Version  1
	I1001 20:16:14.263407   64288 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:16:14.263764   64288 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:16:14.263784   64288 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:16:14.263943   64288 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetState
	I1001 20:16:14.264327   64288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:16:14.264378   64288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:16:14.267063   64288 kapi.go:59] client config for kubernetes-upgrade-869396: &rest.Config{Host:"https://192.168.50.159:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/client.crt", KeyFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kubernetes-upgrade-869396/client.key", CAFile:"/home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil
), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 20:16:14.267394   64288 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-869396"
	W1001 20:16:14.267416   64288 addons.go:243] addon default-storageclass should already be in state true
	I1001 20:16:14.267447   64288 host.go:66] Checking if "kubernetes-upgrade-869396" exists ...
	I1001 20:16:14.267832   64288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:16:14.267874   64288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:16:14.279902   64288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36783
	I1001 20:16:14.280430   64288 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:16:14.280919   64288 main.go:141] libmachine: Using API Version  1
	I1001 20:16:14.280941   64288 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:16:14.281343   64288 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:16:14.281540   64288 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetState
	I1001 20:16:14.283198   64288 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .DriverName
	I1001 20:16:14.283230   64288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45629
	I1001 20:16:14.283604   64288 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:16:14.284018   64288 main.go:141] libmachine: Using API Version  1
	I1001 20:16:14.284045   64288 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:16:14.284328   64288 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:16:14.284799   64288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:16:14.284834   64288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:16:14.284877   64288 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 20:16:14.286068   64288 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:16:14.286089   64288 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 20:16:14.286109   64288 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHHostname
	I1001 20:16:14.289041   64288 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:16:14.289496   64288 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:ea", ip: ""} in network mk-kubernetes-upgrade-869396: {Iface:virbr2 ExpiryTime:2024-10-01 21:14:42 +0000 UTC Type:0 Mac:52:54:00:0b:26:ea Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:kubernetes-upgrade-869396 Clientid:01:52:54:00:0b:26:ea}
	I1001 20:16:14.289526   64288 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined IP address 192.168.50.159 and MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:16:14.289652   64288 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHPort
	I1001 20:16:14.289866   64288 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHKeyPath
	I1001 20:16:14.290048   64288 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHUsername
	I1001 20:16:14.290186   64288 sshutil.go:53] new ssh client: &{IP:192.168.50.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/kubernetes-upgrade-869396/id_rsa Username:docker}
	I1001 20:16:14.300284   64288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34253
	I1001 20:16:14.300805   64288 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:16:14.301301   64288 main.go:141] libmachine: Using API Version  1
	I1001 20:16:14.301322   64288 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:16:14.301642   64288 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:16:14.301841   64288 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetState
	I1001 20:16:14.303245   64288 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .DriverName
	I1001 20:16:14.303565   64288 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 20:16:14.303581   64288 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 20:16:14.303595   64288 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHHostname
	I1001 20:16:14.306506   64288 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:16:14.306867   64288 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:ea", ip: ""} in network mk-kubernetes-upgrade-869396: {Iface:virbr2 ExpiryTime:2024-10-01 21:14:42 +0000 UTC Type:0 Mac:52:54:00:0b:26:ea Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:kubernetes-upgrade-869396 Clientid:01:52:54:00:0b:26:ea}
	I1001 20:16:14.306902   64288 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | domain kubernetes-upgrade-869396 has defined IP address 192.168.50.159 and MAC address 52:54:00:0b:26:ea in network mk-kubernetes-upgrade-869396
	I1001 20:16:14.307098   64288 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHPort
	I1001 20:16:14.307267   64288 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHKeyPath
	I1001 20:16:14.307382   64288 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .GetSSHUsername
	I1001 20:16:14.308095   64288 sshutil.go:53] new ssh client: &{IP:192.168.50.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/kubernetes-upgrade-869396/id_rsa Username:docker}
	I1001 20:16:14.427434   64288 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:16:14.449574   64288 api_server.go:52] waiting for apiserver process to appear ...
	I1001 20:16:14.449675   64288 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:16:14.468580   64288 api_server.go:72] duration metric: took 222.620981ms to wait for apiserver process to appear ...
	I1001 20:16:14.468610   64288 api_server.go:88] waiting for apiserver healthz status ...
	I1001 20:16:14.468639   64288 api_server.go:253] Checking apiserver healthz at https://192.168.50.159:8443/healthz ...
	I1001 20:16:14.479337   64288 api_server.go:279] https://192.168.50.159:8443/healthz returned 200:
	ok
	I1001 20:16:14.480567   64288 api_server.go:141] control plane version: v1.31.1
	I1001 20:16:14.480591   64288 api_server.go:131] duration metric: took 11.974206ms to wait for apiserver health ...
	I1001 20:16:14.480599   64288 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 20:16:14.488479   64288 system_pods.go:59] 8 kube-system pods found
	I1001 20:16:14.488521   64288 system_pods.go:61] "coredns-7c65d6cfc9-52g6f" [aa524290-6d24-4a59-a08c-90634fcd081f] Running
	I1001 20:16:14.488527   64288 system_pods.go:61] "coredns-7c65d6cfc9-hxfck" [006f4c00-1bae-4380-aa6c-91c8e31bc91c] Running
	I1001 20:16:14.488539   64288 system_pods.go:61] "etcd-kubernetes-upgrade-869396" [9a9aa25a-3a3b-4532-924c-b4d6cad1b10f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1001 20:16:14.488551   64288 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-869396" [33ab1ecc-b999-483a-ac47-77071f3806d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1001 20:16:14.488563   64288 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-869396" [a43ba829-086f-4cd2-9519-4b14e05c5a23] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1001 20:16:14.488575   64288 system_pods.go:61] "kube-proxy-j9q29" [dd09e102-100b-41d5-b33b-f97add3713b5] Running
	I1001 20:16:14.488585   64288 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-869396" [21444614-8d47-4072-9c6e-e567a9993563] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1001 20:16:14.488589   64288 system_pods.go:61] "storage-provisioner" [839a5952-f990-4e0e-988d-1f4028f108ea] Running
	I1001 20:16:14.488630   64288 system_pods.go:74] duration metric: took 8.024406ms to wait for pod list to return data ...
	I1001 20:16:14.488640   64288 kubeadm.go:582] duration metric: took 242.689645ms to wait for: map[apiserver:true system_pods:true]
	I1001 20:16:14.488658   64288 node_conditions.go:102] verifying NodePressure condition ...
	I1001 20:16:14.499115   64288 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 20:16:14.499150   64288 node_conditions.go:123] node cpu capacity is 2
	I1001 20:16:14.499162   64288 node_conditions.go:105] duration metric: took 10.498717ms to run NodePressure ...
	I1001 20:16:14.499177   64288 start.go:241] waiting for startup goroutines ...
	I1001 20:16:14.544516   64288 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:16:14.557549   64288 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
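
Enabling the two addons amounts to copying their manifests onto the node and applying them with the node-local kubectl, which is what the two Run lines above do over SSH. An illustrative equivalent without the SSH hop, using the --kubeconfig flag instead of the KUBECONFIG environment variable shown in the log (paths are taken from the log, the rest is assumed):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        for _, manifest := range []string{
            "/etc/kubernetes/addons/storage-provisioner.yaml",
            "/etc/kubernetes/addons/storageclass.yaml",
        } {
            cmd := exec.Command("kubectl",
                "--kubeconfig", "/var/lib/minikube/kubeconfig",
                "apply", "-f", manifest)
            if out, err := cmd.CombinedOutput(); err != nil {
                log.Fatalf("apply %s: %v\n%s", manifest, err, out)
            }
        }
    }
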
	I1001 20:16:15.214722   64288 main.go:141] libmachine: Making call to close driver server
	I1001 20:16:15.214758   64288 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .Close
	I1001 20:16:15.214822   64288 main.go:141] libmachine: Making call to close driver server
	I1001 20:16:15.214841   64288 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .Close
	I1001 20:16:15.215121   64288 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:16:15.215123   64288 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | Closing plugin on server side
	I1001 20:16:15.215137   64288 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:16:15.215193   64288 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | Closing plugin on server side
	I1001 20:16:15.215236   64288 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:16:15.215247   64288 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:16:15.215254   64288 main.go:141] libmachine: Making call to close driver server
	I1001 20:16:15.215268   64288 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .Close
	I1001 20:16:15.215256   64288 main.go:141] libmachine: Making call to close driver server
	I1001 20:16:15.215288   64288 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .Close
	I1001 20:16:15.215476   64288 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:16:15.215490   64288 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:16:15.216734   64288 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | Closing plugin on server side
	I1001 20:16:15.216738   64288 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:16:15.216756   64288 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:16:15.221391   64288 main.go:141] libmachine: Making call to close driver server
	I1001 20:16:15.221407   64288 main.go:141] libmachine: (kubernetes-upgrade-869396) Calling .Close
	I1001 20:16:15.221623   64288 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:16:15.221641   64288 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:16:15.221640   64288 main.go:141] libmachine: (kubernetes-upgrade-869396) DBG | Closing plugin on server side
	I1001 20:16:15.223309   64288 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1001 20:16:15.224518   64288 addons.go:510] duration metric: took 978.544775ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1001 20:16:15.224555   64288 start.go:246] waiting for cluster config update ...
	I1001 20:16:15.224569   64288 start.go:255] writing updated cluster config ...
	I1001 20:16:15.224801   64288 ssh_runner.go:195] Run: rm -f paused
	I1001 20:16:15.274208   64288 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 20:16:15.275691   64288 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-869396" cluster and "default" namespace by default
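
The closing line reports the client/cluster minor-version skew (0 in this run); kubectl is supported within one minor version of the apiserver in either direction, so a non-zero skew here is worth noticing. A hedged sketch of the same comparison driven by `kubectl version --output=json`:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
        "strconv"
        "strings"
    )

    type versionInfo struct {
        Minor string `json:"minor"`
    }

    type versionOutput struct {
        ClientVersion versionInfo `json:"clientVersion"`
        ServerVersion versionInfo `json:"serverVersion"`
    }

    func main() {
        out, err := exec.Command("kubectl", "version", "--output=json").Output()
        if err != nil {
            log.Fatal(err)
        }
        var v versionOutput
        if err := json.Unmarshal(out, &v); err != nil {
            log.Fatal(err)
        }
        // Server minor versions can carry a "+" suffix on patched builds.
        c, _ := strconv.Atoi(strings.TrimRight(v.ClientVersion.Minor, "+"))
        s, _ := strconv.Atoi(strings.TrimRight(v.ServerVersion.Minor, "+"))
        fmt.Printf("minor skew: %d\n", c-s)
    }
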
	
	
	==> CRI-O <==
	Oct 01 20:16:16 kubernetes-upgrade-869396 crio[2867]: time="2024-10-01 20:16:16.016745430Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d68796ca-39b6-4d35-86f9-8e22944a4910 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:16:16 kubernetes-upgrade-869396 crio[2867]: time="2024-10-01 20:16:16.017818821Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=320caeac-f46e-4bc9-a10a-e21fd29eb186 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:16:16 kubernetes-upgrade-869396 crio[2867]: time="2024-10-01 20:16:16.018174691Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727813776018151432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=320caeac-f46e-4bc9-a10a-e21fd29eb186 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:16:16 kubernetes-upgrade-869396 crio[2867]: time="2024-10-01 20:16:16.018728221Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c262e3a7-1cac-47f5-aca8-14c7f8302052 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:16:16 kubernetes-upgrade-869396 crio[2867]: time="2024-10-01 20:16:16.018829216Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c262e3a7-1cac-47f5-aca8-14c7f8302052 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:16:16 kubernetes-upgrade-869396 crio[2867]: time="2024-10-01 20:16:16.019144298Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6080d5605c38323112caddff5102a86e820bbde149fa2d9d26ecdaf554ce3db9,PodSandboxId:4f9cf79e2a48fddc79039775f705129d418e980aa00305fbdc1d1e6b6e6d91a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727813773155739012,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-52g6f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa524290-6d24-4a59-a08c-90634fcd081f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bac588eb7ec7021824b68efd69694cdd9761eb019e9e0a525e0a7fce566ae336,PodSandboxId:8a417676d0f34788171806c840c420cd1668b3dbab3e8cdc2e3d1aa5b4848df0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727813773158092650,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 839a5952-f990-4e0e-988d-1f4028f108ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d149c8443ff355a426f5ee2228c0cbcf8fb44f81dfe0d3fce96416c90eaf36,PodSandboxId:72183575ca03b58ffb0ddb6398f67a8cfde11ded180ccd98181ba67993e04080,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727813773163684345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j9q29,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dd09e102-100b-41d5-b33b-f97add3713b5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df1326a1ba4ec44fa465cf95f2e8a06dea7703e9e9e7144cfb27257d668a10c0,PodSandboxId:090bb6fbbf422daf5a1c8d50f50c0be37246a12330b00f8192572011bd1ffd4b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727813773132597899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hxfck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006f4c00-1bae-4380-aa6c-91
c8e31bc91c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b421a7c0bf4ad205a39fd629ad99911f3614121e5814cf186563bcc932f03b6f,PodSandboxId:ecb67583aa9f45940f60f9c40cabb516772eefbe2f4929d897b2141c90695d75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727813770068486735,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f73a820237920fad45ebc7935b5d1b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09d79de291c7123e90ff48ed944ee54f9a13de2f019bf72a50c3c8b695e87544,PodSandboxId:26e08141ca3b893c5df35a8f231a9270eecb784f08695f68a80328fb90d3203b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727813769801549360,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 373ee18974ef821159bb84891aedebfa,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9171b7f213b9321e885e71f864651d991c3eb8cf0c7d9fc41dffb8168c1a0,PodSandboxId:38837a1758a5326ccabef003cafb0578ba2c857f26d0b340d8a5515d0f1625ed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727813769819444266,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 548d4dce00356a8facc68847b65515e3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d506c39d1c17b1b4ca135d859b3747277e897ab10da2bbc138a4431c21e64f32,PodSandboxId:f6d7689dbeebe04329cb99a8d2c387b46c14355ea25ab94bbdf963aadd00ebb5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727813769791156625,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 992ace412aef59cc07e3b2ef7637325b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ffeffc09e8c235e12960cf69d1d43770de2bfee43391240f3856aba8c54ea19,PodSandboxId:ecb67583aa9f45940f60f9c40cabb516772eefbe2f4929d897b2141c90695d75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727813747938342220,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f73a820237920fad45ebc7935b5d1b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d27a359c056f03f4de1853b931fea7af34ebe8e4ab437025a9ff46c5b5c9e713,PodSandboxId:8a417676d0f34788171806c840c420cd1668b3dbab3e8cdc2e3d1aa5b4848df0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727813746940374094,Labels:map[string]string{
io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 839a5952-f990-4e0e-988d-1f4028f108ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d31d03f256e2c54d5b692c5193b717e8dc4bad021b1235776e66e0385fce0cfb,PodSandboxId:090bb6fbbf422daf5a1c8d50f50c0be37246a12330b00f8192572011bd1ffd4b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727813732998451995,Labels:map[string]string{io.kubernetes.container.n
ame: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hxfck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006f4c00-1bae-4380-aa6c-91c8e31bc91c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4821392c6c84de53d6e19912721fbcbbd528e7d3e6bc3fbc44671127d76b48d2,PodSandboxId:4f9cf79e2a48fddc79039775f705129d418e980aa00305fbdc1d1e6b6e6d91a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727813732948951567,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-52g6f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa524290-6d24-4a59-a08c-90634fcd081f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e672ef7211761f50e1a6a7c3d99c934286c3c4b9266e3e18297cfa20f96263,PodSandboxId:5d95d3e4d42409e3553fe9d1fac81339d8430c3a3862b7221a1a3aa4ab2e3c2a,
Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727813730828007765,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 548d4dce00356a8facc68847b65515e3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b00af3fcecb8aa1dc1a66947db4b8e451615b86befdedb2df8ae7e9ea12e36fc,PodSandboxId:6c715afd4d0ed84eee91d29811efc18b062b7e6a3940
9071e56e5e78db1cce5d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727813730379699477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j9q29,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd09e102-100b-41d5-b33b-f97add3713b5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c88306b7aaad8c10b81810a08c4cb346304ae878e859d3bbaaa40b7bcfcc47f1,PodSandboxId:5a7f46f7c3a64dca5268600ab2c6b9b0af477ee10260a8cbdefa11d10f709d4c,Metadata:&Con
tainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727813730536317341,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 373ee18974ef821159bb84891aedebfa,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4f5c956a447ed2ca8be8f604f4f98d3ef05e190b6b82d58a377fb76c2de98b,PodSandboxId:2bd1eb82247748832dcc3e1a7e3442622f4b9d115f57223960c9295b76f7b21d,Metadata:&ContainerMetadata{Name:kube-scheduler,A
ttempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727813730594374511,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 992ace412aef59cc07e3b2ef7637325b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c262e3a7-1cac-47f5-aca8-14c7f8302052 name=/runtime.v1.RuntimeService/ListContainers
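
These CRI-O debug entries are the runtime answering routine RuntimeService calls (Version, ImageFsInfo, ListContainers) from its clients; `crictl ps -a` issues essentially the same ListContainers RPC. A hedged sketch of making that call directly against the CRI socket with the published CRI API bindings, assuming CRI-O's default socket path:

    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // CRI-O's default socket; adjust if the runtime is configured differently.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := client.ListContainers(context.Background(), &runtimeapi.ListContainersRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%s\t%s\t%s\n", c.Id[:12], c.Metadata.Name, c.State)
        }
    }
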
	Oct 01 20:16:16 kubernetes-upgrade-869396 crio[2867]: time="2024-10-01 20:16:16.067627889Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=472b4d55-c4d4-4cc5-be44-81149c5e5b9e name=/runtime.v1.RuntimeService/Version
	Oct 01 20:16:16 kubernetes-upgrade-869396 crio[2867]: time="2024-10-01 20:16:16.067711679Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=472b4d55-c4d4-4cc5-be44-81149c5e5b9e name=/runtime.v1.RuntimeService/Version
	Oct 01 20:16:16 kubernetes-upgrade-869396 crio[2867]: time="2024-10-01 20:16:16.068871300Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=36e2a006-eb28-423a-a80b-b5e72b81737b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:16:16 kubernetes-upgrade-869396 crio[2867]: time="2024-10-01 20:16:16.069263072Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727813776069240567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=36e2a006-eb28-423a-a80b-b5e72b81737b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:16:16 kubernetes-upgrade-869396 crio[2867]: time="2024-10-01 20:16:16.069850213Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=664926de-b59c-40a0-9064-f1ef994bcb11 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:16:16 kubernetes-upgrade-869396 crio[2867]: time="2024-10-01 20:16:16.069916775Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=664926de-b59c-40a0-9064-f1ef994bcb11 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:16:16 kubernetes-upgrade-869396 crio[2867]: time="2024-10-01 20:16:16.070496864Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6080d5605c38323112caddff5102a86e820bbde149fa2d9d26ecdaf554ce3db9,PodSandboxId:4f9cf79e2a48fddc79039775f705129d418e980aa00305fbdc1d1e6b6e6d91a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727813773155739012,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-52g6f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa524290-6d24-4a59-a08c-90634fcd081f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bac588eb7ec7021824b68efd69694cdd9761eb019e9e0a525e0a7fce566ae336,PodSandboxId:8a417676d0f34788171806c840c420cd1668b3dbab3e8cdc2e3d1aa5b4848df0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727813773158092650,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 839a5952-f990-4e0e-988d-1f4028f108ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d149c8443ff355a426f5ee2228c0cbcf8fb44f81dfe0d3fce96416c90eaf36,PodSandboxId:72183575ca03b58ffb0ddb6398f67a8cfde11ded180ccd98181ba67993e04080,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727813773163684345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j9q29,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dd09e102-100b-41d5-b33b-f97add3713b5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df1326a1ba4ec44fa465cf95f2e8a06dea7703e9e9e7144cfb27257d668a10c0,PodSandboxId:090bb6fbbf422daf5a1c8d50f50c0be37246a12330b00f8192572011bd1ffd4b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727813773132597899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hxfck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006f4c00-1bae-4380-aa6c-91
c8e31bc91c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b421a7c0bf4ad205a39fd629ad99911f3614121e5814cf186563bcc932f03b6f,PodSandboxId:ecb67583aa9f45940f60f9c40cabb516772eefbe2f4929d897b2141c90695d75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727813770068486735,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f73a820237920fad45ebc7935b5d1b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09d79de291c7123e90ff48ed944ee54f9a13de2f019bf72a50c3c8b695e87544,PodSandboxId:26e08141ca3b893c5df35a8f231a9270eecb784f08695f68a80328fb90d3203b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727813769801549360,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 373ee18974ef821159bb84891aedebfa,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9171b7f213b9321e885e71f864651d991c3eb8cf0c7d9fc41dffb8168c1a0,PodSandboxId:38837a1758a5326ccabef003cafb0578ba2c857f26d0b340d8a5515d0f1625ed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727813769819444266,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 548d4dce00356a8facc68847b65515e3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d506c39d1c17b1b4ca135d859b3747277e897ab10da2bbc138a4431c21e64f32,PodSandboxId:f6d7689dbeebe04329cb99a8d2c387b46c14355ea25ab94bbdf963aadd00ebb5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727813769791156625,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 992ace412aef59cc07e3b2ef7637325b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ffeffc09e8c235e12960cf69d1d43770de2bfee43391240f3856aba8c54ea19,PodSandboxId:ecb67583aa9f45940f60f9c40cabb516772eefbe2f4929d897b2141c90695d75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727813747938342220,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f73a820237920fad45ebc7935b5d1b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d27a359c056f03f4de1853b931fea7af34ebe8e4ab437025a9ff46c5b5c9e713,PodSandboxId:8a417676d0f34788171806c840c420cd1668b3dbab3e8cdc2e3d1aa5b4848df0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727813746940374094,Labels:map[string]string{
io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 839a5952-f990-4e0e-988d-1f4028f108ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d31d03f256e2c54d5b692c5193b717e8dc4bad021b1235776e66e0385fce0cfb,PodSandboxId:090bb6fbbf422daf5a1c8d50f50c0be37246a12330b00f8192572011bd1ffd4b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727813732998451995,Labels:map[string]string{io.kubernetes.container.n
ame: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hxfck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006f4c00-1bae-4380-aa6c-91c8e31bc91c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4821392c6c84de53d6e19912721fbcbbd528e7d3e6bc3fbc44671127d76b48d2,PodSandboxId:4f9cf79e2a48fddc79039775f705129d418e980aa00305fbdc1d1e6b6e6d91a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727813732948951567,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-52g6f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa524290-6d24-4a59-a08c-90634fcd081f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e672ef7211761f50e1a6a7c3d99c934286c3c4b9266e3e18297cfa20f96263,PodSandboxId:5d95d3e4d42409e3553fe9d1fac81339d8430c3a3862b7221a1a3aa4ab2e3c2a,
Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727813730828007765,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 548d4dce00356a8facc68847b65515e3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b00af3fcecb8aa1dc1a66947db4b8e451615b86befdedb2df8ae7e9ea12e36fc,PodSandboxId:6c715afd4d0ed84eee91d29811efc18b062b7e6a3940
9071e56e5e78db1cce5d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727813730379699477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j9q29,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd09e102-100b-41d5-b33b-f97add3713b5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c88306b7aaad8c10b81810a08c4cb346304ae878e859d3bbaaa40b7bcfcc47f1,PodSandboxId:5a7f46f7c3a64dca5268600ab2c6b9b0af477ee10260a8cbdefa11d10f709d4c,Metadata:&Con
tainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727813730536317341,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 373ee18974ef821159bb84891aedebfa,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4f5c956a447ed2ca8be8f604f4f98d3ef05e190b6b82d58a377fb76c2de98b,PodSandboxId:2bd1eb82247748832dcc3e1a7e3442622f4b9d115f57223960c9295b76f7b21d,Metadata:&ContainerMetadata{Name:kube-scheduler,A
ttempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727813730594374511,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 992ace412aef59cc07e3b2ef7637325b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=664926de-b59c-40a0-9064-f1ef994bcb11 name=/runtime.v1.RuntimeService/ListContainers
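
The ListContainers responses captured above can be cross-checked directly on the node. A minimal sketch, assuming crictl is installed on the VM and pointed at the default CRI-O socket (/var/run/crio/crio.sock); the container ID is taken from the log entry above:

  # List all containers known to CRI-O, including exited ones
  # (same data as the RuntimeService/ListContainers responses in this log)
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a

  # Dump the full metadata for one container shown above (ID prefix is enough)
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock inspect 6080d5605c383
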
	Oct 01 20:16:16 kubernetes-upgrade-869396 crio[2867]: time="2024-10-01 20:16:16.096448333Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=84c0d9f0-d585-4bfa-b8ee-effeb0bc4102 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 01 20:16:16 kubernetes-upgrade-869396 crio[2867]: time="2024-10-01 20:16:16.096798585Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f6d7689dbeebe04329cb99a8d2c387b46c14355ea25ab94bbdf963aadd00ebb5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-869396,Uid:992ace412aef59cc07e3b2ef7637325b,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727813732491739604,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 992ace412aef59cc07e3b2ef7637325b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 992ace412aef59cc07e3b2ef7637325b,kubernetes.io/config.seen: 2024-10-01T20:14:59.869382867Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:26e08141ca3b893c5df35a8f231a9270eecb784f08695f68a80328fb90d3203b,Metadata:&PodSandboxMetad
ata{Name:etcd-kubernetes-upgrade-869396,Uid:373ee18974ef821159bb84891aedebfa,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727813732481249111,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 373ee18974ef821159bb84891aedebfa,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.159:2379,kubernetes.io/config.hash: 373ee18974ef821159bb84891aedebfa,kubernetes.io/config.seen: 2024-10-01T20:14:59.936565470Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ecb67583aa9f45940f60f9c40cabb516772eefbe2f4929d897b2141c90695d75,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-869396,Uid:5f73a820237920fad45ebc7935b5d1b5,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727813732479113560,Labels:map[string]string{component: kube-apiserver,io.kube
rnetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f73a820237920fad45ebc7935b5d1b5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.159:8443,kubernetes.io/config.hash: 5f73a820237920fad45ebc7935b5d1b5,kubernetes.io/config.seen: 2024-10-01T20:14:59.869383898Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4f9cf79e2a48fddc79039775f705129d418e980aa00305fbdc1d1e6b6e6d91a4,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-52g6f,Uid:aa524290-6d24-4a59-a08c-90634fcd081f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727813732440902241,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-52g6f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa524290-6d24-4a59-a08c-90634fcd081f,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Ann
otations:map[string]string{kubernetes.io/config.seen: 2024-10-01T20:15:15.870127171Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:090bb6fbbf422daf5a1c8d50f50c0be37246a12330b00f8192572011bd1ffd4b,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxfck,Uid:006f4c00-1bae-4380-aa6c-91c8e31bc91c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727813732412533702,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-hxfck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006f4c00-1bae-4380-aa6c-91c8e31bc91c,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-01T20:15:15.828678035Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8a417676d0f34788171806c840c420cd1668b3dbab3e8cdc2e3d1aa5b4848df0,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:839a5952-f990-4e0e-988d-1f4028f108ea,Namespace:kube-system,Attempt:2,},State:SANDBOX
_READY,CreatedAt:1727813732388183203,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 839a5952-f990-4e0e-988d-1f4028f108ea,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"typ
e\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-01T20:15:16.650010723Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:38837a1758a5326ccabef003cafb0578ba2c857f26d0b340d8a5515d0f1625ed,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-869396,Uid:548d4dce00356a8facc68847b65515e3,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727813732301191934,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 548d4dce00356a8facc68847b65515e3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 548d4dce00356a8facc68847b65515e3,kubernetes.io/config.seen: 2024-10-01T20:14:59.869379505Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:72183575ca03b58ffb0ddb6398f67a8cfde11ded180ccd98181ba67993e04080,Metadata:&PodSand
boxMetadata{Name:kube-proxy-j9q29,Uid:dd09e102-100b-41d5-b33b-f97add3713b5,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727813732300498066,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-j9q29,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd09e102-100b-41d5-b33b-f97add3713b5,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-01T20:15:15.906177129Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5d95d3e4d42409e3553fe9d1fac81339d8430c3a3862b7221a1a3aa4ab2e3c2a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-869396,Uid:548d4dce00356a8facc68847b65515e3,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1727813730352989241,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kuberne
tes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 548d4dce00356a8facc68847b65515e3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 548d4dce00356a8facc68847b65515e3,kubernetes.io/config.seen: 2024-10-01T20:14:59.869379505Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6c0be6f7b8711bd25fdfccf979837b784ac19123a82b90f44f1c86e2a418dd13,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-869396,Uid:5f73a820237920fad45ebc7935b5d1b5,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1727813730221395559,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f73a820237920fad45ebc7935b5d1b5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.159:8443,kubernetes.io/config.
hash: 5f73a820237920fad45ebc7935b5d1b5,kubernetes.io/config.seen: 2024-10-01T20:14:59.869383898Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5a7f46f7c3a64dca5268600ab2c6b9b0af477ee10260a8cbdefa11d10f709d4c,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-869396,Uid:373ee18974ef821159bb84891aedebfa,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1727813730202583805,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 373ee18974ef821159bb84891aedebfa,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.159:2379,kubernetes.io/config.hash: 373ee18974ef821159bb84891aedebfa,kubernetes.io/config.seen: 2024-10-01T20:14:59.936565470Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2bd1eb82247748832dcc3e1a7e3442622f4b9d115f57223960c9295b76f7
b21d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-869396,Uid:992ace412aef59cc07e3b2ef7637325b,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1727813730182611386,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 992ace412aef59cc07e3b2ef7637325b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 992ace412aef59cc07e3b2ef7637325b,kubernetes.io/config.seen: 2024-10-01T20:14:59.869382867Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6c715afd4d0ed84eee91d29811efc18b062b7e6a39409071e56e5e78db1cce5d,Metadata:&PodSandboxMetadata{Name:kube-proxy-j9q29,Uid:dd09e102-100b-41d5-b33b-f97add3713b5,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1727813730078304948,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name
: POD,io.kubernetes.pod.name: kube-proxy-j9q29,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd09e102-100b-41d5-b33b-f97add3713b5,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-01T20:15:15.906177129Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=84c0d9f0-d585-4bfa-b8ee-effeb0bc4102 name=/runtime.v1.RuntimeService/ListPodSandbox
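
The ListPodSandbox response above can be reproduced the same way. A minimal sketch, again assuming crictl against the CRI-O socket; the SANDBOX_NOTREADY entries correspond to the pre-restart control-plane pods, and the sandbox ID below is the kube-scheduler sandbox from the log:

  # List pod sandboxes (RuntimeService/ListPodSandbox)
  sudo crictl pods

  # Show full sandbox metadata for one control-plane pod (ID prefix is enough)
  sudo crictl inspectp f6d7689dbeebe
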
	Oct 01 20:16:16 kubernetes-upgrade-869396 crio[2867]: time="2024-10-01 20:16:16.097536290Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b00d71d-0b4d-4ff9-9e8a-b826a0f21cf2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:16:16 kubernetes-upgrade-869396 crio[2867]: time="2024-10-01 20:16:16.097594858Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b00d71d-0b4d-4ff9-9e8a-b826a0f21cf2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:16:16 kubernetes-upgrade-869396 crio[2867]: time="2024-10-01 20:16:16.097962605Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6080d5605c38323112caddff5102a86e820bbde149fa2d9d26ecdaf554ce3db9,PodSandboxId:4f9cf79e2a48fddc79039775f705129d418e980aa00305fbdc1d1e6b6e6d91a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727813773155739012,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-52g6f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa524290-6d24-4a59-a08c-90634fcd081f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bac588eb7ec7021824b68efd69694cdd9761eb019e9e0a525e0a7fce566ae336,PodSandboxId:8a417676d0f34788171806c840c420cd1668b3dbab3e8cdc2e3d1aa5b4848df0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727813773158092650,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 839a5952-f990-4e0e-988d-1f4028f108ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d149c8443ff355a426f5ee2228c0cbcf8fb44f81dfe0d3fce96416c90eaf36,PodSandboxId:72183575ca03b58ffb0ddb6398f67a8cfde11ded180ccd98181ba67993e04080,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727813773163684345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j9q29,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dd09e102-100b-41d5-b33b-f97add3713b5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df1326a1ba4ec44fa465cf95f2e8a06dea7703e9e9e7144cfb27257d668a10c0,PodSandboxId:090bb6fbbf422daf5a1c8d50f50c0be37246a12330b00f8192572011bd1ffd4b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727813773132597899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hxfck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006f4c00-1bae-4380-aa6c-91
c8e31bc91c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b421a7c0bf4ad205a39fd629ad99911f3614121e5814cf186563bcc932f03b6f,PodSandboxId:ecb67583aa9f45940f60f9c40cabb516772eefbe2f4929d897b2141c90695d75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727813770068486735,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f73a820237920fad45ebc7935b5d1b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09d79de291c7123e90ff48ed944ee54f9a13de2f019bf72a50c3c8b695e87544,PodSandboxId:26e08141ca3b893c5df35a8f231a9270eecb784f08695f68a80328fb90d3203b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727813769801549360,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 373ee18974ef821159bb84891aedebfa,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9171b7f213b9321e885e71f864651d991c3eb8cf0c7d9fc41dffb8168c1a0,PodSandboxId:38837a1758a5326ccabef003cafb0578ba2c857f26d0b340d8a5515d0f1625ed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727813769819444266,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 548d4dce00356a8facc68847b65515e3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d506c39d1c17b1b4ca135d859b3747277e897ab10da2bbc138a4431c21e64f32,PodSandboxId:f6d7689dbeebe04329cb99a8d2c387b46c14355ea25ab94bbdf963aadd00ebb5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727813769791156625,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 992ace412aef59cc07e3b2ef7637325b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ffeffc09e8c235e12960cf69d1d43770de2bfee43391240f3856aba8c54ea19,PodSandboxId:ecb67583aa9f45940f60f9c40cabb516772eefbe2f4929d897b2141c90695d75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727813747938342220,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f73a820237920fad45ebc7935b5d1b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d27a359c056f03f4de1853b931fea7af34ebe8e4ab437025a9ff46c5b5c9e713,PodSandboxId:8a417676d0f34788171806c840c420cd1668b3dbab3e8cdc2e3d1aa5b4848df0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727813746940374094,Labels:map[string]string{
io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 839a5952-f990-4e0e-988d-1f4028f108ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d31d03f256e2c54d5b692c5193b717e8dc4bad021b1235776e66e0385fce0cfb,PodSandboxId:090bb6fbbf422daf5a1c8d50f50c0be37246a12330b00f8192572011bd1ffd4b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727813732998451995,Labels:map[string]string{io.kubernetes.container.n
ame: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hxfck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006f4c00-1bae-4380-aa6c-91c8e31bc91c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4821392c6c84de53d6e19912721fbcbbd528e7d3e6bc3fbc44671127d76b48d2,PodSandboxId:4f9cf79e2a48fddc79039775f705129d418e980aa00305fbdc1d1e6b6e6d91a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727813732948951567,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-52g6f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa524290-6d24-4a59-a08c-90634fcd081f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e672ef7211761f50e1a6a7c3d99c934286c3c4b9266e3e18297cfa20f96263,PodSandboxId:5d95d3e4d42409e3553fe9d1fac81339d8430c3a3862b7221a1a3aa4ab2e3c2a,
Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727813730828007765,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 548d4dce00356a8facc68847b65515e3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b00af3fcecb8aa1dc1a66947db4b8e451615b86befdedb2df8ae7e9ea12e36fc,PodSandboxId:6c715afd4d0ed84eee91d29811efc18b062b7e6a3940
9071e56e5e78db1cce5d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727813730379699477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j9q29,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd09e102-100b-41d5-b33b-f97add3713b5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c88306b7aaad8c10b81810a08c4cb346304ae878e859d3bbaaa40b7bcfcc47f1,PodSandboxId:5a7f46f7c3a64dca5268600ab2c6b9b0af477ee10260a8cbdefa11d10f709d4c,Metadata:&Con
tainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727813730536317341,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 373ee18974ef821159bb84891aedebfa,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4f5c956a447ed2ca8be8f604f4f98d3ef05e190b6b82d58a377fb76c2de98b,PodSandboxId:2bd1eb82247748832dcc3e1a7e3442622f4b9d115f57223960c9295b76f7b21d,Metadata:&ContainerMetadata{Name:kube-scheduler,A
ttempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727813730594374511,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 992ace412aef59cc07e3b2ef7637325b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b00d71d-0b4d-4ff9-9e8a-b826a0f21cf2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:16:16 kubernetes-upgrade-869396 crio[2867]: time="2024-10-01 20:16:16.122278243Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f0c83c3a-2343-4066-ae34-b53c54d3af6c name=/runtime.v1.RuntimeService/Version
	Oct 01 20:16:16 kubernetes-upgrade-869396 crio[2867]: time="2024-10-01 20:16:16.122363893Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f0c83c3a-2343-4066-ae34-b53c54d3af6c name=/runtime.v1.RuntimeService/Version
	Oct 01 20:16:16 kubernetes-upgrade-869396 crio[2867]: time="2024-10-01 20:16:16.123461235Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=101ecedf-291b-4028-8453-731096ecc1fd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:16:16 kubernetes-upgrade-869396 crio[2867]: time="2024-10-01 20:16:16.123881361Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727813776123857542,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=101ecedf-291b-4028-8453-731096ecc1fd name=/runtime.v1.ImageService/ImageFsInfo
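
The Version and ImageFsInfo exchanges in this log also map to crictl subcommands. A minimal sketch, same assumptions as above; per the responses logged here the runtime reports cri-o 1.29.1 and the image store lives under /var/lib/containers/storage/overlay-images:

  # Runtime name and version (RuntimeService/Version)
  sudo crictl version

  # Image filesystem usage (ImageService/ImageFsInfo)
  sudo crictl imagefsinfo
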
	Oct 01 20:16:16 kubernetes-upgrade-869396 crio[2867]: time="2024-10-01 20:16:16.124635731Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=18cb5aff-5c61-4222-96a1-9e4cce7e4239 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:16:16 kubernetes-upgrade-869396 crio[2867]: time="2024-10-01 20:16:16.124691884Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=18cb5aff-5c61-4222-96a1-9e4cce7e4239 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:16:16 kubernetes-upgrade-869396 crio[2867]: time="2024-10-01 20:16:16.125066271Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6080d5605c38323112caddff5102a86e820bbde149fa2d9d26ecdaf554ce3db9,PodSandboxId:4f9cf79e2a48fddc79039775f705129d418e980aa00305fbdc1d1e6b6e6d91a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727813773155739012,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-52g6f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa524290-6d24-4a59-a08c-90634fcd081f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bac588eb7ec7021824b68efd69694cdd9761eb019e9e0a525e0a7fce566ae336,PodSandboxId:8a417676d0f34788171806c840c420cd1668b3dbab3e8cdc2e3d1aa5b4848df0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727813773158092650,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 839a5952-f990-4e0e-988d-1f4028f108ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d149c8443ff355a426f5ee2228c0cbcf8fb44f81dfe0d3fce96416c90eaf36,PodSandboxId:72183575ca03b58ffb0ddb6398f67a8cfde11ded180ccd98181ba67993e04080,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727813773163684345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j9q29,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dd09e102-100b-41d5-b33b-f97add3713b5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df1326a1ba4ec44fa465cf95f2e8a06dea7703e9e9e7144cfb27257d668a10c0,PodSandboxId:090bb6fbbf422daf5a1c8d50f50c0be37246a12330b00f8192572011bd1ffd4b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727813773132597899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hxfck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006f4c00-1bae-4380-aa6c-91
c8e31bc91c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b421a7c0bf4ad205a39fd629ad99911f3614121e5814cf186563bcc932f03b6f,PodSandboxId:ecb67583aa9f45940f60f9c40cabb516772eefbe2f4929d897b2141c90695d75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727813770068486735,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f73a820237920fad45ebc7935b5d1b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09d79de291c7123e90ff48ed944ee54f9a13de2f019bf72a50c3c8b695e87544,PodSandboxId:26e08141ca3b893c5df35a8f231a9270eecb784f08695f68a80328fb90d3203b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727813769801549360,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 373ee18974ef821159bb84891aedebfa,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9171b7f213b9321e885e71f864651d991c3eb8cf0c7d9fc41dffb8168c1a0,PodSandboxId:38837a1758a5326ccabef003cafb0578ba2c857f26d0b340d8a5515d0f1625ed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727813769819444266,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 548d4dce00356a8facc68847b65515e3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d506c39d1c17b1b4ca135d859b3747277e897ab10da2bbc138a4431c21e64f32,PodSandboxId:f6d7689dbeebe04329cb99a8d2c387b46c14355ea25ab94bbdf963aadd00ebb5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727813769791156625,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 992ace412aef59cc07e3b2ef7637325b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ffeffc09e8c235e12960cf69d1d43770de2bfee43391240f3856aba8c54ea19,PodSandboxId:ecb67583aa9f45940f60f9c40cabb516772eefbe2f4929d897b2141c90695d75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727813747938342220,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f73a820237920fad45ebc7935b5d1b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d27a359c056f03f4de1853b931fea7af34ebe8e4ab437025a9ff46c5b5c9e713,PodSandboxId:8a417676d0f34788171806c840c420cd1668b3dbab3e8cdc2e3d1aa5b4848df0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727813746940374094,Labels:map[string]string{
io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 839a5952-f990-4e0e-988d-1f4028f108ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d31d03f256e2c54d5b692c5193b717e8dc4bad021b1235776e66e0385fce0cfb,PodSandboxId:090bb6fbbf422daf5a1c8d50f50c0be37246a12330b00f8192572011bd1ffd4b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727813732998451995,Labels:map[string]string{io.kubernetes.container.n
ame: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hxfck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006f4c00-1bae-4380-aa6c-91c8e31bc91c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4821392c6c84de53d6e19912721fbcbbd528e7d3e6bc3fbc44671127d76b48d2,PodSandboxId:4f9cf79e2a48fddc79039775f705129d418e980aa00305fbdc1d1e6b6e6d91a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727813732948951567,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-52g6f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa524290-6d24-4a59-a08c-90634fcd081f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e672ef7211761f50e1a6a7c3d99c934286c3c4b9266e3e18297cfa20f96263,PodSandboxId:5d95d3e4d42409e3553fe9d1fac81339d8430c3a3862b7221a1a3aa4ab2e3c2a,
Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727813730828007765,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 548d4dce00356a8facc68847b65515e3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b00af3fcecb8aa1dc1a66947db4b8e451615b86befdedb2df8ae7e9ea12e36fc,PodSandboxId:6c715afd4d0ed84eee91d29811efc18b062b7e6a3940
9071e56e5e78db1cce5d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727813730379699477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j9q29,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd09e102-100b-41d5-b33b-f97add3713b5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c88306b7aaad8c10b81810a08c4cb346304ae878e859d3bbaaa40b7bcfcc47f1,PodSandboxId:5a7f46f7c3a64dca5268600ab2c6b9b0af477ee10260a8cbdefa11d10f709d4c,Metadata:&Con
tainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727813730536317341,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 373ee18974ef821159bb84891aedebfa,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4f5c956a447ed2ca8be8f604f4f98d3ef05e190b6b82d58a377fb76c2de98b,PodSandboxId:2bd1eb82247748832dcc3e1a7e3442622f4b9d115f57223960c9295b76f7b21d,Metadata:&ContainerMetadata{Name:kube-scheduler,A
ttempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727813730594374511,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-869396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 992ace412aef59cc07e3b2ef7637325b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=18cb5aff-5c61-4222-96a1-9e4cce7e4239 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d3d149c8443ff       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   3 seconds ago       Running             kube-proxy                2                   72183575ca03b       kube-proxy-j9q29
	bac588eb7ec70       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       3                   8a417676d0f34       storage-provisioner
	6080d5605c383       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   2                   4f9cf79e2a48f       coredns-7c65d6cfc9-52g6f
	df1326a1ba4ec       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   2                   090bb6fbbf422       coredns-7c65d6cfc9-hxfck
	b421a7c0bf4ad       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   6 seconds ago       Running             kube-apiserver            3                   ecb67583aa9f4       kube-apiserver-kubernetes-upgrade-869396
	e3c9171b7f213       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   6 seconds ago       Running             kube-controller-manager   2                   38837a1758a53       kube-controller-manager-kubernetes-upgrade-869396
	09d79de291c71       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   6 seconds ago       Running             etcd                      2                   26e08141ca3b8       etcd-kubernetes-upgrade-869396
	d506c39d1c17b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   6 seconds ago       Running             kube-scheduler            2                   f6d7689dbeebe       kube-scheduler-kubernetes-upgrade-869396
	2ffeffc09e8c2       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   28 seconds ago      Exited              kube-apiserver            2                   ecb67583aa9f4       kube-apiserver-kubernetes-upgrade-869396
	d27a359c056f0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   29 seconds ago      Exited              storage-provisioner       2                   8a417676d0f34       storage-provisioner
	d31d03f256e2c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   43 seconds ago      Exited              coredns                   1                   090bb6fbbf422       coredns-7c65d6cfc9-hxfck
	4821392c6c84d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   43 seconds ago      Exited              coredns                   1                   4f9cf79e2a48f       coredns-7c65d6cfc9-52g6f
	55e672ef72117       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   45 seconds ago      Exited              kube-controller-manager   1                   5d95d3e4d4240       kube-controller-manager-kubernetes-upgrade-869396
	7b4f5c956a447       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   45 seconds ago      Exited              kube-scheduler            1                   2bd1eb8224774       kube-scheduler-kubernetes-upgrade-869396
	c88306b7aaad8       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   45 seconds ago      Exited              etcd                      1                   5a7f46f7c3a64       etcd-kubernetes-upgrade-869396
	b00af3fcecb8a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   45 seconds ago      Exited              kube-proxy                1                   6c715afd4d0ed       kube-proxy-j9q29
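A hedged reproduction note: the column layout above is consistent with crictl ps -a output, so the same listing can normally be pulled on the node itself (for example via minikube ssh -p kubernetes-upgrade-869396, using this test's profile name):

    sudo crictl ps -a

Most of the container IDs here also appear, with matching attempt counts, in the ListContainers RPC dump earlier in this section; the four containers created "3 seconds ago" appear to be simply newer than that dump.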
	
	
	==> coredns [4821392c6c84de53d6e19912721fbcbbd528e7d3e6bc3fbc44671127d76b48d2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6080d5605c38323112caddff5102a86e820bbde149fa2d9d26ecdaf554ce3db9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [d31d03f256e2c54d5b692c5193b717e8dc4bad021b1235776e66e0385fce0cfb] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [df1326a1ba4ec44fa465cf95f2e8a06dea7703e9e9e7144cfb27257d668a10c0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-869396
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-869396
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 20:15:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-869396
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 20:16:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 20:16:12 +0000   Tue, 01 Oct 2024 20:15:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 20:16:12 +0000   Tue, 01 Oct 2024 20:15:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 20:16:12 +0000   Tue, 01 Oct 2024 20:15:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 20:16:12 +0000   Tue, 01 Oct 2024 20:15:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.159
	  Hostname:    kubernetes-upgrade-869396
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ea66ecc83704c96ad5dce0ccbe98ba7
	  System UUID:                6ea66ecc-8370-4c96-ad5d-ce0ccbe98ba7
	  Boot ID:                    05773511-d0ad-4cdc-8b80-db3568675836
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-52g6f                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     61s
	  kube-system                 coredns-7c65d6cfc9-hxfck                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     61s
	  kube-system                 etcd-kubernetes-upgrade-869396                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         61s
	  kube-system                 kube-apiserver-kubernetes-upgrade-869396             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-869396    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-j9q29                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-scheduler-kubernetes-upgrade-869396             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 58s                kube-proxy       
	  Normal  Starting                 2s                 kube-proxy       
	  Normal  Starting                 77s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  75s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  73s (x8 over 77s)  kubelet          Node kubernetes-upgrade-869396 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    73s (x8 over 77s)  kubelet          Node kubernetes-upgrade-869396 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     73s (x7 over 77s)  kubelet          Node kubernetes-upgrade-869396 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           64s                node-controller  Node kubernetes-upgrade-869396 event: Registered Node kubernetes-upgrade-869396 in Controller
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-869396 event: Registered Node kubernetes-upgrade-869396 in Controller
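A hedged reproduction note: this block has the shape of standard kubectl describe node output, so it would be regenerated against this cluster roughly as follows (assuming the kubeconfig context name matches the minikube profile, as it does for the other profiles in this report):

    kubectl --context kubernetes-upgrade-869396 describe node kubernetes-upgrade-869396

The two RegisteredNode events (64s and 1s ago) fit the controller-manager being restarted during the upgrade, which the container status table above also shows.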
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.791801] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
	[  +0.056190] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066762] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.160387] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.150963] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.286809] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +3.914995] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[  +2.200579] systemd-fstab-generator[832]: Ignoring "noauto" option for root device
	[  +0.065909] kauditd_printk_skb: 158 callbacks suppressed
	[Oct 1 20:15] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.583044] systemd-fstab-generator[1229]: Ignoring "noauto" option for root device
	[ +13.507541] systemd-fstab-generator[2167]: Ignoring "noauto" option for root device
	[  +0.078590] kauditd_printk_skb: 97 callbacks suppressed
	[  +0.056151] systemd-fstab-generator[2179]: Ignoring "noauto" option for root device
	[  +0.167697] systemd-fstab-generator[2193]: Ignoring "noauto" option for root device
	[  +0.167536] systemd-fstab-generator[2205]: Ignoring "noauto" option for root device
	[  +0.804829] systemd-fstab-generator[2570]: Ignoring "noauto" option for root device
	[  +1.149953] systemd-fstab-generator[2984]: Ignoring "noauto" option for root device
	[ +11.560773] kauditd_printk_skb: 286 callbacks suppressed
	[  +6.134262] systemd-fstab-generator[3736]: Ignoring "noauto" option for root device
	[  +0.087286] kauditd_printk_skb: 2 callbacks suppressed
	[Oct 1 20:16] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.251459] systemd-fstab-generator[4287]: Ignoring "noauto" option for root device
	[  +0.101695] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [09d79de291c7123e90ff48ed944ee54f9a13de2f019bf72a50c3c8b695e87544] <==
	{"level":"info","ts":"2024-10-01T20:16:10.071272Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3bfeb9fda0aaae06","local-member-id":"c0548effd504e8e0","added-peer-id":"c0548effd504e8e0","added-peer-peer-urls":["https://192.168.50.159:2380"]}
	{"level":"info","ts":"2024-10-01T20:16:10.071402Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3bfeb9fda0aaae06","local-member-id":"c0548effd504e8e0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T20:16:10.071431Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T20:16:10.072155Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T20:16:10.075214Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-01T20:16:10.075406Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"c0548effd504e8e0","initial-advertise-peer-urls":["https://192.168.50.159:2380"],"listen-peer-urls":["https://192.168.50.159:2380"],"advertise-client-urls":["https://192.168.50.159:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.159:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-01T20:16:10.075442Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-01T20:16:10.075549Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.159:2380"}
	{"level":"info","ts":"2024-10-01T20:16:10.075572Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.159:2380"}
	{"level":"info","ts":"2024-10-01T20:16:11.228009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0548effd504e8e0 is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-01T20:16:11.228071Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0548effd504e8e0 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-01T20:16:11.228118Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0548effd504e8e0 received MsgPreVoteResp from c0548effd504e8e0 at term 2"}
	{"level":"info","ts":"2024-10-01T20:16:11.228131Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0548effd504e8e0 became candidate at term 3"}
	{"level":"info","ts":"2024-10-01T20:16:11.228137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0548effd504e8e0 received MsgVoteResp from c0548effd504e8e0 at term 3"}
	{"level":"info","ts":"2024-10-01T20:16:11.228145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0548effd504e8e0 became leader at term 3"}
	{"level":"info","ts":"2024-10-01T20:16:11.228167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c0548effd504e8e0 elected leader c0548effd504e8e0 at term 3"}
	{"level":"info","ts":"2024-10-01T20:16:11.230981Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c0548effd504e8e0","local-member-attributes":"{Name:kubernetes-upgrade-869396 ClientURLs:[https://192.168.50.159:2379]}","request-path":"/0/members/c0548effd504e8e0/attributes","cluster-id":"3bfeb9fda0aaae06","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-01T20:16:11.231140Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T20:16:11.231246Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T20:16:11.231845Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-01T20:16:11.231863Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-01T20:16:11.232226Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T20:16:11.232372Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T20:16:11.233196Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.159:2379"}
	{"level":"info","ts":"2024-10-01T20:16:11.233268Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [c88306b7aaad8c10b81810a08c4cb346304ae878e859d3bbaaa40b7bcfcc47f1] <==
	{"level":"info","ts":"2024-10-01T20:15:31.094386Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-10-01T20:15:31.101730Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"3bfeb9fda0aaae06","local-member-id":"c0548effd504e8e0","commit-index":384}
	{"level":"info","ts":"2024-10-01T20:15:31.102166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0548effd504e8e0 switched to configuration voters=()"}
	{"level":"info","ts":"2024-10-01T20:15:31.102222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0548effd504e8e0 became follower at term 2"}
	{"level":"info","ts":"2024-10-01T20:15:31.102260Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft c0548effd504e8e0 [peers: [], term: 2, commit: 384, applied: 0, lastindex: 384, lastterm: 2]"}
	{"level":"warn","ts":"2024-10-01T20:15:31.104035Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-10-01T20:15:31.110013Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":376}
	{"level":"info","ts":"2024-10-01T20:15:31.120466Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-10-01T20:15:31.124023Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"c0548effd504e8e0","timeout":"7s"}
	{"level":"info","ts":"2024-10-01T20:15:31.124431Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"c0548effd504e8e0"}
	{"level":"info","ts":"2024-10-01T20:15:31.124523Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"c0548effd504e8e0","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-10-01T20:15:31.124858Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-10-01T20:15:31.125044Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-01T20:15:31.125123Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-01T20:15:31.125132Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-01T20:15:31.125519Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0548effd504e8e0 switched to configuration voters=(13858859182767532256)"}
	{"level":"info","ts":"2024-10-01T20:15:31.125696Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3bfeb9fda0aaae06","local-member-id":"c0548effd504e8e0","added-peer-id":"c0548effd504e8e0","added-peer-peer-urls":["https://192.168.50.159:2380"]}
	{"level":"info","ts":"2024-10-01T20:15:31.125920Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3bfeb9fda0aaae06","local-member-id":"c0548effd504e8e0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T20:15:31.126014Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T20:15:31.132033Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T20:15:31.138410Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-01T20:15:31.138901Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"c0548effd504e8e0","initial-advertise-peer-urls":["https://192.168.50.159:2380"],"listen-peer-urls":["https://192.168.50.159:2380"],"advertise-client-urls":["https://192.168.50.159:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.159:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-01T20:15:31.140291Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.159:2380"}
	{"level":"info","ts":"2024-10-01T20:15:31.140322Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.159:2380"}
	{"level":"info","ts":"2024-10-01T20:15:31.143615Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> kernel <==
	 20:16:16 up 1 min,  0 users,  load average: 1.75, 0.66, 0.24
	Linux kubernetes-upgrade-869396 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2ffeffc09e8c235e12960cf69d1d43770de2bfee43391240f3856aba8c54ea19] <==
	I1001 20:15:48.081861       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1001 20:15:48.536876       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:15:48.536954       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1001 20:15:48.537000       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I1001 20:15:48.547886       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1001 20:15:48.554460       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1001 20:15:48.557949       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1001 20:15:48.558279       1 instance.go:232] Using reconciler: lease
	W1001 20:15:48.560833       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:15:49.538091       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:15:49.538094       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:15:49.562208       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:15:50.894480       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:15:51.197625       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:15:51.257995       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:15:53.350536       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:15:53.862422       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:15:54.057458       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:15:57.667316       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:15:58.067198       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:15:58.293334       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:16:04.060289       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:16:04.357350       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:16:05.100885       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1001 20:16:08.559619       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
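The fatal line above is the earlier apiserver attempt giving up after roughly twenty seconds of connection-refused errors against etcd on 127.0.0.1:2379, so it exited because etcd was not yet listening, not because of its own configuration. A hedged diagnostic sketch, reusing the certificate paths reported in the etcd log above (and assuming etcdctl is installed on the node):

    sudo etcdctl --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
      endpoint health

No manual check was needed in this run: the replacement etcd (09d79de291c7) wins its election at 20:16:11 and the replacement apiserver (b421a7c0bf4a) has its caches synced by 20:16:12.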
	
	
	==> kube-apiserver [b421a7c0bf4ad205a39fd629ad99911f3614121e5814cf186563bcc932f03b6f] <==
	I1001 20:16:12.492930       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1001 20:16:12.493554       1 shared_informer.go:320] Caches are synced for configmaps
	I1001 20:16:12.493655       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1001 20:16:12.493691       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1001 20:16:12.493912       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1001 20:16:12.494391       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1001 20:16:12.505008       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1001 20:16:12.505101       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1001 20:16:12.506368       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1001 20:16:12.510687       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1001 20:16:12.510747       1 policy_source.go:224] refreshing policies
	I1001 20:16:12.514086       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1001 20:16:12.514374       1 aggregator.go:171] initial CRD sync complete...
	I1001 20:16:12.514397       1 autoregister_controller.go:144] Starting autoregister controller
	I1001 20:16:12.514404       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1001 20:16:12.514410       1 cache.go:39] Caches are synced for autoregister controller
	I1001 20:16:12.592895       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1001 20:16:13.431409       1 controller.go:615] quota admission added evaluator for: endpoints
	I1001 20:16:13.431666       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1001 20:16:14.010108       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1001 20:16:14.021059       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1001 20:16:14.059510       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1001 20:16:14.196705       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1001 20:16:14.205184       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1001 20:16:16.133500       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [55e672ef7211761f50e1a6a7c3d99c934286c3c4b9266e3e18297cfa20f96263] <==
	
	
	==> kube-controller-manager [e3c9171b7f213b9321e885e71f864651d991c3eb8cf0c7d9fc41dffb8168c1a0] <==
	I1001 20:16:15.782559       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1001 20:16:15.782573       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1001 20:16:15.783858       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1001 20:16:15.785054       1 shared_informer.go:320] Caches are synced for node
	I1001 20:16:15.785131       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1001 20:16:15.785149       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1001 20:16:15.785153       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1001 20:16:15.785157       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1001 20:16:15.785220       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-869396"
	I1001 20:16:15.785058       1 shared_informer.go:320] Caches are synced for deployment
	I1001 20:16:15.790514       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1001 20:16:15.792730       1 shared_informer.go:320] Caches are synced for daemon sets
	I1001 20:16:15.832490       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1001 20:16:15.837908       1 shared_informer.go:320] Caches are synced for namespace
	I1001 20:16:15.844243       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1001 20:16:15.855520       1 shared_informer.go:320] Caches are synced for service account
	I1001 20:16:15.880170       1 shared_informer.go:320] Caches are synced for disruption
	I1001 20:16:15.888065       1 shared_informer.go:320] Caches are synced for resource quota
	I1001 20:16:15.890292       1 shared_informer.go:320] Caches are synced for resource quota
	I1001 20:16:15.986230       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1001 20:16:15.993535       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="284.203267ms"
	I1001 20:16:15.993846       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="96.713µs"
	I1001 20:16:16.422089       1 shared_informer.go:320] Caches are synced for garbage collector
	I1001 20:16:16.470830       1 shared_informer.go:320] Caches are synced for garbage collector
	I1001 20:16:16.470905       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [b00af3fcecb8aa1dc1a66947db4b8e451615b86befdedb2df8ae7e9ea12e36fc] <==
	
	
	==> kube-proxy [d3d149c8443ff355a426f5ee2228c0cbcf8fb44f81dfe0d3fce96416c90eaf36] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 20:16:13.529329       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 20:16:13.546388       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.159"]
	E1001 20:16:13.546957       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 20:16:13.590861       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 20:16:13.590891       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 20:16:13.590913       1 server_linux.go:169] "Using iptables Proxier"
	I1001 20:16:13.596836       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 20:16:13.597672       1 server.go:483] "Version info" version="v1.31.1"
	I1001 20:16:13.597703       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 20:16:13.600259       1 config.go:199] "Starting service config controller"
	I1001 20:16:13.600557       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 20:16:13.600599       1 config.go:105] "Starting endpoint slice config controller"
	I1001 20:16:13.600605       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 20:16:13.601593       1 config.go:328] "Starting node config controller"
	I1001 20:16:13.601618       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 20:16:13.701826       1 shared_informer.go:320] Caches are synced for service config
	I1001 20:16:13.701922       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 20:16:13.701705       1 shared_informer.go:320] Caches are synced for node config
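One configuration hint surfaces in this restart: kube-proxy reports that nodePortAddresses is unset and suggests --nodeport-addresses primary. As a hedged sketch only (this test does not set the option), the suggestion maps onto the flag named in the message, which also accepts a comma-separated list of CIDRs:

    kube-proxy --nodeport-addresses primary

Left unset, as here, NodePort connections are accepted on every local IP; kube-proxy flags this but continues normally, so it reads as an advisory here rather than a failure.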
	
	
	==> kube-scheduler [7b4f5c956a447ed2ca8be8f604f4f98d3ef05e190b6b82d58a377fb76c2de98b] <==
	
	
	==> kube-scheduler [d506c39d1c17b1b4ca135d859b3747277e897ab10da2bbc138a4431c21e64f32] <==
	I1001 20:16:10.422664       1 serving.go:386] Generated self-signed cert in-memory
	W1001 20:16:12.430289       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1001 20:16:12.430332       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1001 20:16:12.430342       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1001 20:16:12.430354       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1001 20:16:12.509921       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1001 20:16:12.509973       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 20:16:12.512398       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 20:16:12.512499       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1001 20:16:12.512645       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1001 20:16:12.512729       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1001 20:16:12.613500       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
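The requestheader_controller warning at the top of this block carries its own suggested remediation. As an illustrative sketch only (ROLEBINDING_NAME, YOUR_NS and YOUR_SA are placeholders copied verbatim from the warning, not values used by this test), the suggested command would be run as:

    kubectl create rolebinding ROLEBINDING_NAME -n kube-system \
      --role=extension-apiserver-authentication-reader \
      --serviceaccount=YOUR_NS:YOUR_SA

Here the condition is transient rather than fatal: the scheduler continues without the authentication configuration and its client-ca informer syncs about 180ms later, once the restarted apiserver is serving.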
	
	
	==> kubelet <==
	Oct 01 20:16:09 kubernetes-upgrade-869396 kubelet[3743]: I1001 20:16:09.775560    3743 scope.go:117] "RemoveContainer" containerID="c88306b7aaad8c10b81810a08c4cb346304ae878e859d3bbaaa40b7bcfcc47f1"
	Oct 01 20:16:09 kubernetes-upgrade-869396 kubelet[3743]: I1001 20:16:09.777441    3743 scope.go:117] "RemoveContainer" containerID="55e672ef7211761f50e1a6a7c3d99c934286c3c4b9266e3e18297cfa20f96263"
	Oct 01 20:16:09 kubernetes-upgrade-869396 kubelet[3743]: I1001 20:16:09.778913    3743 scope.go:117] "RemoveContainer" containerID="7b4f5c956a447ed2ca8be8f604f4f98d3ef05e190b6b82d58a377fb76c2de98b"
	Oct 01 20:16:09 kubernetes-upgrade-869396 kubelet[3743]: E1001 20:16:09.898669    3743 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727813769898400743,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:16:09 kubernetes-upgrade-869396 kubelet[3743]: E1001 20:16:09.898703    3743 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727813769898400743,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:16:09 kubernetes-upgrade-869396 kubelet[3743]: E1001 20:16:09.967959    3743 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-869396?timeout=10s\": dial tcp 192.168.50.159:8443: connect: connection refused" interval="800ms"
	Oct 01 20:16:10 kubernetes-upgrade-869396 kubelet[3743]: I1001 20:16:10.049151    3743 scope.go:117] "RemoveContainer" containerID="2ffeffc09e8c235e12960cf69d1d43770de2bfee43391240f3856aba8c54ea19"
	Oct 01 20:16:11 kubernetes-upgrade-869396 kubelet[3743]: I1001 20:16:11.373150    3743 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-869396"
	Oct 01 20:16:12 kubernetes-upgrade-869396 kubelet[3743]: I1001 20:16:12.537240    3743 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-869396"
	Oct 01 20:16:12 kubernetes-upgrade-869396 kubelet[3743]: I1001 20:16:12.537635    3743 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-869396"
	Oct 01 20:16:12 kubernetes-upgrade-869396 kubelet[3743]: E1001 20:16:12.537697    3743 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-869396\": node \"kubernetes-upgrade-869396\" not found"
	Oct 01 20:16:12 kubernetes-upgrade-869396 kubelet[3743]: I1001 20:16:12.539663    3743 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 01 20:16:12 kubernetes-upgrade-869396 kubelet[3743]: I1001 20:16:12.540712    3743 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 01 20:16:12 kubernetes-upgrade-869396 kubelet[3743]: E1001 20:16:12.552087    3743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"kubernetes-upgrade-869396\" not found"
	Oct 01 20:16:12 kubernetes-upgrade-869396 kubelet[3743]: E1001 20:16:12.652852    3743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"kubernetes-upgrade-869396\" not found"
	Oct 01 20:16:12 kubernetes-upgrade-869396 kubelet[3743]: I1001 20:16:12.816563    3743 apiserver.go:52] "Watching apiserver"
	Oct 01 20:16:12 kubernetes-upgrade-869396 kubelet[3743]: I1001 20:16:12.911410    3743 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 01 20:16:12 kubernetes-upgrade-869396 kubelet[3743]: I1001 20:16:12.964740    3743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd09e102-100b-41d5-b33b-f97add3713b5-xtables-lock\") pod \"kube-proxy-j9q29\" (UID: \"dd09e102-100b-41d5-b33b-f97add3713b5\") " pod="kube-system/kube-proxy-j9q29"
	Oct 01 20:16:12 kubernetes-upgrade-869396 kubelet[3743]: I1001 20:16:12.964947    3743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/839a5952-f990-4e0e-988d-1f4028f108ea-tmp\") pod \"storage-provisioner\" (UID: \"839a5952-f990-4e0e-988d-1f4028f108ea\") " pod="kube-system/storage-provisioner"
	Oct 01 20:16:12 kubernetes-upgrade-869396 kubelet[3743]: I1001 20:16:12.965098    3743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd09e102-100b-41d5-b33b-f97add3713b5-lib-modules\") pod \"kube-proxy-j9q29\" (UID: \"dd09e102-100b-41d5-b33b-f97add3713b5\") " pod="kube-system/kube-proxy-j9q29"
	Oct 01 20:16:13 kubernetes-upgrade-869396 kubelet[3743]: E1001 20:16:13.086689    3743 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-869396\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-869396"
	Oct 01 20:16:13 kubernetes-upgrade-869396 kubelet[3743]: I1001 20:16:13.121200    3743 scope.go:117] "RemoveContainer" containerID="d31d03f256e2c54d5b692c5193b717e8dc4bad021b1235776e66e0385fce0cfb"
	Oct 01 20:16:13 kubernetes-upgrade-869396 kubelet[3743]: I1001 20:16:13.121512    3743 scope.go:117] "RemoveContainer" containerID="4821392c6c84de53d6e19912721fbcbbd528e7d3e6bc3fbc44671127d76b48d2"
	Oct 01 20:16:13 kubernetes-upgrade-869396 kubelet[3743]: I1001 20:16:13.121841    3743 scope.go:117] "RemoveContainer" containerID="d27a359c056f03f4de1853b931fea7af34ebe8e4ab437025a9ff46c5b5c9e713"
	Oct 01 20:16:13 kubernetes-upgrade-869396 kubelet[3743]: I1001 20:16:13.121993    3743 scope.go:117] "RemoveContainer" containerID="b00af3fcecb8aa1dc1a66947db4b8e451615b86befdedb2df8ae7e9ea12e36fc"
	
	
	==> storage-provisioner [bac588eb7ec7021824b68efd69694cdd9761eb019e9e0a525e0a7fce566ae336] <==
	I1001 20:16:13.359992       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1001 20:16:13.399253       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1001 20:16:13.399318       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1001 20:16:13.436360       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1001 20:16:13.436525       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-869396_aec4a893-89c9-4372-a61a-eded8303bb3e!
	I1001 20:16:13.436582       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"59d690a7-f8fd-4e99-8f92-d30de3d14995", APIVersion:"v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-869396_aec4a893-89c9-4372-a61a-eded8303bb3e became leader
	I1001 20:16:13.536963       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-869396_aec4a893-89c9-4372-a61a-eded8303bb3e!
	
	
	==> storage-provisioner [d27a359c056f03f4de1853b931fea7af34ebe8e4ab437025a9ff46c5b5c9e713] <==
	I1001 20:15:47.013574       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1001 20:15:47.015037       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
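(Context for the kube-scheduler warnings near the top of this dump: requestheader_controller.go and authentication.go report that the extension-apiserver-authentication ConfigMap cannot be read, and the log itself names the usual kubectl create rolebinding fix. Below is a minimal client-go sketch of that same RBAC change, included only as an illustration; the kubeconfig path, the binding name "scheduler-authentication-reader", and binding it to the system:kube-scheduler user are assumptions for the example, not something this test performs.)

package main

import (
	"context"
	"log"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (path is an assumption for the sketch).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Roughly equivalent to the command suggested in the log:
	//   kubectl create rolebinding scheduler-authentication-reader -n kube-system \
	//     --role=extension-apiserver-authentication-reader --user=system:kube-scheduler
	rb := &rbacv1.RoleBinding{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "scheduler-authentication-reader", // hypothetical name
			Namespace: "kube-system",
		},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "Role",
			Name:     "extension-apiserver-authentication-reader",
		},
		Subjects: []rbacv1.Subject{{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "User",
			Name:     "system:kube-scheduler", // the identity named in the scheduler warning
		}},
	}

	if _, err := client.RbacV1().RoleBindings("kube-system").
		Create(context.Background(), rb, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("rolebinding created")
}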
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-869396 -n kubernetes-upgrade-869396
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-869396 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-869396" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-869396
--- FAIL: TestKubernetesUpgrade (403.77s)
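(Context for the storage-provisioner output above: the "attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath" and "successfully acquired lease" lines come from client-go leader election. The sketch below is a minimal, hypothetical reproduction of that pattern using a coordination.k8s.io Lease lock; the provisioner in these logs actually uses an older Endpoints-based lock, as the Kind:"Endpoints" event shows, and the lock name "example-hostpath-lock" is a placeholder, not minikube's code.)

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// In-cluster config; this step is where the earlier provisioner container
	// in the logs fails when the API server is unreachable (connection refused).
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatalf("error getting in-cluster config: %v", err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	id, _ := os.Hostname()

	// Lease lock in kube-system, analogous to the k8s.io-minikube-hostpath lock named above.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "example-hostpath-lock", // hypothetical name
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		ReleaseOnCancel: true,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("became leader; starting provisioner controller")
				// controller loop would run here
			},
			OnStoppedLeading: func() {
				log.Println("lost leadership; shutting down")
			},
		},
	})
}

In these logs the restarted provisioner container (bac588eb...) reaches this point and acquires the lease, while the earlier container (d27a359c...) exits at its API-server version probe with connection refused, consistent with the control plane still coming back up at that moment.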

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (59.51s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-170137 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-170137 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (54.967812509s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-170137] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19736
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-170137" primary control-plane node in "pause-170137" cluster
	* Updating the running kvm2 "pause-170137" VM ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-170137" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 20:07:04.424955   55919 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:07:04.425255   55919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:07:04.425265   55919 out.go:358] Setting ErrFile to fd 2...
	I1001 20:07:04.425270   55919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:07:04.425562   55919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 20:07:04.426153   55919 out.go:352] Setting JSON to false
	I1001 20:07:04.427255   55919 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6566,"bootTime":1727806658,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 20:07:04.427324   55919 start.go:139] virtualization: kvm guest
	I1001 20:07:04.429506   55919 out.go:177] * [pause-170137] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 20:07:04.430911   55919 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 20:07:04.430923   55919 notify.go:220] Checking for updates...
	I1001 20:07:04.433102   55919 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 20:07:04.435126   55919 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:07:04.436381   55919 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 20:07:04.437657   55919 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 20:07:04.438892   55919 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 20:07:04.440660   55919 config.go:182] Loaded profile config "pause-170137": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:07:04.441262   55919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:07:04.441346   55919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:07:04.457290   55919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43341
	I1001 20:07:04.457994   55919 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:07:04.458788   55919 main.go:141] libmachine: Using API Version  1
	I1001 20:07:04.458817   55919 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:07:04.459352   55919 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:07:04.459579   55919 main.go:141] libmachine: (pause-170137) Calling .DriverName
	I1001 20:07:04.459891   55919 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 20:07:04.460383   55919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:07:04.460442   55919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:07:04.477209   55919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44029
	I1001 20:07:04.477643   55919 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:07:04.478304   55919 main.go:141] libmachine: Using API Version  1
	I1001 20:07:04.478353   55919 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:07:04.478771   55919 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:07:04.478966   55919 main.go:141] libmachine: (pause-170137) Calling .DriverName
	I1001 20:07:04.521499   55919 out.go:177] * Using the kvm2 driver based on existing profile
	I1001 20:07:04.522652   55919 start.go:297] selected driver: kvm2
	I1001 20:07:04.522676   55919 start.go:901] validating driver "kvm2" against &{Name:pause-170137 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:pause-170137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.12 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:07:04.522827   55919 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 20:07:04.523135   55919 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:07:04.523197   55919 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-11198/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 20:07:04.540148   55919 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 20:07:04.541367   55919 cni.go:84] Creating CNI manager for ""
	I1001 20:07:04.541435   55919 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:07:04.541523   55919 start.go:340] cluster config:
	{Name:pause-170137 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-170137 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.12 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-ali
ases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:07:04.541711   55919 iso.go:125] acquiring lock: {Name:mk06aa0d5182c9fbfa5e40313025cbec2b4400a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:07:04.543361   55919 out.go:177] * Starting "pause-170137" primary control-plane node in "pause-170137" cluster
	I1001 20:07:04.544458   55919 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 20:07:04.544508   55919 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 20:07:04.544524   55919 cache.go:56] Caching tarball of preloaded images
	I1001 20:07:04.544639   55919 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 20:07:04.544655   55919 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 20:07:04.544826   55919 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/pause-170137/config.json ...
	I1001 20:07:04.545134   55919 start.go:360] acquireMachinesLock for pause-170137: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 20:07:19.860771   55919 start.go:364] duration metric: took 15.315597441s to acquireMachinesLock for "pause-170137"
	I1001 20:07:19.860835   55919 start.go:96] Skipping create...Using existing machine configuration
	I1001 20:07:19.860846   55919 fix.go:54] fixHost starting: 
	I1001 20:07:19.861279   55919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:07:19.861331   55919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:07:19.878864   55919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45363
	I1001 20:07:19.879403   55919 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:07:19.879921   55919 main.go:141] libmachine: Using API Version  1
	I1001 20:07:19.879947   55919 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:07:19.880288   55919 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:07:19.880490   55919 main.go:141] libmachine: (pause-170137) Calling .DriverName
	I1001 20:07:19.880636   55919 main.go:141] libmachine: (pause-170137) Calling .GetState
	I1001 20:07:19.882418   55919 fix.go:112] recreateIfNeeded on pause-170137: state=Running err=<nil>
	W1001 20:07:19.882454   55919 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 20:07:19.884072   55919 out.go:177] * Updating the running kvm2 "pause-170137" VM ...
	I1001 20:07:19.885138   55919 machine.go:93] provisionDockerMachine start ...
	I1001 20:07:19.885164   55919 main.go:141] libmachine: (pause-170137) Calling .DriverName
	I1001 20:07:19.885375   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHHostname
	I1001 20:07:19.888242   55919 main.go:141] libmachine: (pause-170137) DBG | domain pause-170137 has defined MAC address 52:54:00:7a:fc:02 in network mk-pause-170137
	I1001 20:07:19.888713   55919 main.go:141] libmachine: (pause-170137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:fc:02", ip: ""} in network mk-pause-170137: {Iface:virbr2 ExpiryTime:2024-10-01 21:06:22 +0000 UTC Type:0 Mac:52:54:00:7a:fc:02 Iaid: IPaddr:192.168.50.12 Prefix:24 Hostname:pause-170137 Clientid:01:52:54:00:7a:fc:02}
	I1001 20:07:19.888733   55919 main.go:141] libmachine: (pause-170137) DBG | domain pause-170137 has defined IP address 192.168.50.12 and MAC address 52:54:00:7a:fc:02 in network mk-pause-170137
	I1001 20:07:19.888887   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHPort
	I1001 20:07:19.889070   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHKeyPath
	I1001 20:07:19.889257   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHKeyPath
	I1001 20:07:19.889407   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHUsername
	I1001 20:07:19.889581   55919 main.go:141] libmachine: Using SSH client type: native
	I1001 20:07:19.889790   55919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.12 22 <nil> <nil>}
	I1001 20:07:19.889805   55919 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 20:07:20.009134   55919 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-170137
	
	I1001 20:07:20.009171   55919 main.go:141] libmachine: (pause-170137) Calling .GetMachineName
	I1001 20:07:20.009473   55919 buildroot.go:166] provisioning hostname "pause-170137"
	I1001 20:07:20.009513   55919 main.go:141] libmachine: (pause-170137) Calling .GetMachineName
	I1001 20:07:20.009682   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHHostname
	I1001 20:07:20.012613   55919 main.go:141] libmachine: (pause-170137) DBG | domain pause-170137 has defined MAC address 52:54:00:7a:fc:02 in network mk-pause-170137
	I1001 20:07:20.013152   55919 main.go:141] libmachine: (pause-170137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:fc:02", ip: ""} in network mk-pause-170137: {Iface:virbr2 ExpiryTime:2024-10-01 21:06:22 +0000 UTC Type:0 Mac:52:54:00:7a:fc:02 Iaid: IPaddr:192.168.50.12 Prefix:24 Hostname:pause-170137 Clientid:01:52:54:00:7a:fc:02}
	I1001 20:07:20.013191   55919 main.go:141] libmachine: (pause-170137) DBG | domain pause-170137 has defined IP address 192.168.50.12 and MAC address 52:54:00:7a:fc:02 in network mk-pause-170137
	I1001 20:07:20.013434   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHPort
	I1001 20:07:20.013653   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHKeyPath
	I1001 20:07:20.013845   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHKeyPath
	I1001 20:07:20.014062   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHUsername
	I1001 20:07:20.014262   55919 main.go:141] libmachine: Using SSH client type: native
	I1001 20:07:20.014479   55919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.12 22 <nil> <nil>}
	I1001 20:07:20.014491   55919 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-170137 && echo "pause-170137" | sudo tee /etc/hostname
	I1001 20:07:20.138790   55919 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-170137
	
	I1001 20:07:20.138828   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHHostname
	I1001 20:07:20.141660   55919 main.go:141] libmachine: (pause-170137) DBG | domain pause-170137 has defined MAC address 52:54:00:7a:fc:02 in network mk-pause-170137
	I1001 20:07:20.142075   55919 main.go:141] libmachine: (pause-170137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:fc:02", ip: ""} in network mk-pause-170137: {Iface:virbr2 ExpiryTime:2024-10-01 21:06:22 +0000 UTC Type:0 Mac:52:54:00:7a:fc:02 Iaid: IPaddr:192.168.50.12 Prefix:24 Hostname:pause-170137 Clientid:01:52:54:00:7a:fc:02}
	I1001 20:07:20.142111   55919 main.go:141] libmachine: (pause-170137) DBG | domain pause-170137 has defined IP address 192.168.50.12 and MAC address 52:54:00:7a:fc:02 in network mk-pause-170137
	I1001 20:07:20.142284   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHPort
	I1001 20:07:20.142470   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHKeyPath
	I1001 20:07:20.142622   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHKeyPath
	I1001 20:07:20.142784   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHUsername
	I1001 20:07:20.142970   55919 main.go:141] libmachine: Using SSH client type: native
	I1001 20:07:20.143130   55919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.12 22 <nil> <nil>}
	I1001 20:07:20.143158   55919 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-170137' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-170137/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-170137' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 20:07:20.253172   55919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 20:07:20.253199   55919 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 20:07:20.253217   55919 buildroot.go:174] setting up certificates
	I1001 20:07:20.253225   55919 provision.go:84] configureAuth start
	I1001 20:07:20.253232   55919 main.go:141] libmachine: (pause-170137) Calling .GetMachineName
	I1001 20:07:20.253548   55919 main.go:141] libmachine: (pause-170137) Calling .GetIP
	I1001 20:07:20.256463   55919 main.go:141] libmachine: (pause-170137) DBG | domain pause-170137 has defined MAC address 52:54:00:7a:fc:02 in network mk-pause-170137
	I1001 20:07:20.256870   55919 main.go:141] libmachine: (pause-170137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:fc:02", ip: ""} in network mk-pause-170137: {Iface:virbr2 ExpiryTime:2024-10-01 21:06:22 +0000 UTC Type:0 Mac:52:54:00:7a:fc:02 Iaid: IPaddr:192.168.50.12 Prefix:24 Hostname:pause-170137 Clientid:01:52:54:00:7a:fc:02}
	I1001 20:07:20.256888   55919 main.go:141] libmachine: (pause-170137) DBG | domain pause-170137 has defined IP address 192.168.50.12 and MAC address 52:54:00:7a:fc:02 in network mk-pause-170137
	I1001 20:07:20.257038   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHHostname
	I1001 20:07:20.259745   55919 main.go:141] libmachine: (pause-170137) DBG | domain pause-170137 has defined MAC address 52:54:00:7a:fc:02 in network mk-pause-170137
	I1001 20:07:20.260124   55919 main.go:141] libmachine: (pause-170137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:fc:02", ip: ""} in network mk-pause-170137: {Iface:virbr2 ExpiryTime:2024-10-01 21:06:22 +0000 UTC Type:0 Mac:52:54:00:7a:fc:02 Iaid: IPaddr:192.168.50.12 Prefix:24 Hostname:pause-170137 Clientid:01:52:54:00:7a:fc:02}
	I1001 20:07:20.260168   55919 main.go:141] libmachine: (pause-170137) DBG | domain pause-170137 has defined IP address 192.168.50.12 and MAC address 52:54:00:7a:fc:02 in network mk-pause-170137
	I1001 20:07:20.260294   55919 provision.go:143] copyHostCerts
	I1001 20:07:20.260382   55919 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 20:07:20.260397   55919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 20:07:20.260462   55919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 20:07:20.260584   55919 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 20:07:20.260594   55919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 20:07:20.260623   55919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 20:07:20.260714   55919 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 20:07:20.260724   55919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 20:07:20.260754   55919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 20:07:20.260864   55919 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.pause-170137 san=[127.0.0.1 192.168.50.12 localhost minikube pause-170137]
	I1001 20:07:20.587223   55919 provision.go:177] copyRemoteCerts
	I1001 20:07:20.587282   55919 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 20:07:20.587304   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHHostname
	I1001 20:07:20.590143   55919 main.go:141] libmachine: (pause-170137) DBG | domain pause-170137 has defined MAC address 52:54:00:7a:fc:02 in network mk-pause-170137
	I1001 20:07:20.590505   55919 main.go:141] libmachine: (pause-170137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:fc:02", ip: ""} in network mk-pause-170137: {Iface:virbr2 ExpiryTime:2024-10-01 21:06:22 +0000 UTC Type:0 Mac:52:54:00:7a:fc:02 Iaid: IPaddr:192.168.50.12 Prefix:24 Hostname:pause-170137 Clientid:01:52:54:00:7a:fc:02}
	I1001 20:07:20.590541   55919 main.go:141] libmachine: (pause-170137) DBG | domain pause-170137 has defined IP address 192.168.50.12 and MAC address 52:54:00:7a:fc:02 in network mk-pause-170137
	I1001 20:07:20.590752   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHPort
	I1001 20:07:20.591011   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHKeyPath
	I1001 20:07:20.591164   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHUsername
	I1001 20:07:20.591291   55919 sshutil.go:53] new ssh client: &{IP:192.168.50.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/pause-170137/id_rsa Username:docker}
	I1001 20:07:20.675791   55919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 20:07:20.700600   55919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 20:07:20.728536   55919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 20:07:20.756915   55919 provision.go:87] duration metric: took 503.676441ms to configureAuth
	I1001 20:07:20.756950   55919 buildroot.go:189] setting minikube options for container-runtime
	I1001 20:07:20.757166   55919 config.go:182] Loaded profile config "pause-170137": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:07:20.757240   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHHostname
	I1001 20:07:20.759986   55919 main.go:141] libmachine: (pause-170137) DBG | domain pause-170137 has defined MAC address 52:54:00:7a:fc:02 in network mk-pause-170137
	I1001 20:07:20.760348   55919 main.go:141] libmachine: (pause-170137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:fc:02", ip: ""} in network mk-pause-170137: {Iface:virbr2 ExpiryTime:2024-10-01 21:06:22 +0000 UTC Type:0 Mac:52:54:00:7a:fc:02 Iaid: IPaddr:192.168.50.12 Prefix:24 Hostname:pause-170137 Clientid:01:52:54:00:7a:fc:02}
	I1001 20:07:20.760393   55919 main.go:141] libmachine: (pause-170137) DBG | domain pause-170137 has defined IP address 192.168.50.12 and MAC address 52:54:00:7a:fc:02 in network mk-pause-170137
	I1001 20:07:20.760610   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHPort
	I1001 20:07:20.760826   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHKeyPath
	I1001 20:07:20.761000   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHKeyPath
	I1001 20:07:20.761113   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHUsername
	I1001 20:07:20.761282   55919 main.go:141] libmachine: Using SSH client type: native
	I1001 20:07:20.761486   55919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.12 22 <nil> <nil>}
	I1001 20:07:20.761507   55919 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 20:07:26.334709   55919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 20:07:26.334751   55919 machine.go:96] duration metric: took 6.449597085s to provisionDockerMachine
	I1001 20:07:26.334765   55919 start.go:293] postStartSetup for "pause-170137" (driver="kvm2")
	I1001 20:07:26.334779   55919 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 20:07:26.334819   55919 main.go:141] libmachine: (pause-170137) Calling .DriverName
	I1001 20:07:26.335133   55919 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 20:07:26.335165   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHHostname
	I1001 20:07:26.338637   55919 main.go:141] libmachine: (pause-170137) DBG | domain pause-170137 has defined MAC address 52:54:00:7a:fc:02 in network mk-pause-170137
	I1001 20:07:26.339101   55919 main.go:141] libmachine: (pause-170137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:fc:02", ip: ""} in network mk-pause-170137: {Iface:virbr2 ExpiryTime:2024-10-01 21:06:22 +0000 UTC Type:0 Mac:52:54:00:7a:fc:02 Iaid: IPaddr:192.168.50.12 Prefix:24 Hostname:pause-170137 Clientid:01:52:54:00:7a:fc:02}
	I1001 20:07:26.339128   55919 main.go:141] libmachine: (pause-170137) DBG | domain pause-170137 has defined IP address 192.168.50.12 and MAC address 52:54:00:7a:fc:02 in network mk-pause-170137
	I1001 20:07:26.339349   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHPort
	I1001 20:07:26.339580   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHKeyPath
	I1001 20:07:26.339765   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHUsername
	I1001 20:07:26.339927   55919 sshutil.go:53] new ssh client: &{IP:192.168.50.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/pause-170137/id_rsa Username:docker}
	I1001 20:07:26.431975   55919 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 20:07:26.436173   55919 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 20:07:26.436210   55919 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 20:07:26.436292   55919 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 20:07:26.436442   55919 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 20:07:26.436546   55919 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 20:07:26.446876   55919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 20:07:26.479851   55919 start.go:296] duration metric: took 145.071396ms for postStartSetup
	I1001 20:07:26.479903   55919 fix.go:56] duration metric: took 6.619060548s for fixHost
	I1001 20:07:26.479934   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHHostname
	I1001 20:07:26.482838   55919 main.go:141] libmachine: (pause-170137) DBG | domain pause-170137 has defined MAC address 52:54:00:7a:fc:02 in network mk-pause-170137
	I1001 20:07:26.483336   55919 main.go:141] libmachine: (pause-170137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:fc:02", ip: ""} in network mk-pause-170137: {Iface:virbr2 ExpiryTime:2024-10-01 21:06:22 +0000 UTC Type:0 Mac:52:54:00:7a:fc:02 Iaid: IPaddr:192.168.50.12 Prefix:24 Hostname:pause-170137 Clientid:01:52:54:00:7a:fc:02}
	I1001 20:07:26.483369   55919 main.go:141] libmachine: (pause-170137) DBG | domain pause-170137 has defined IP address 192.168.50.12 and MAC address 52:54:00:7a:fc:02 in network mk-pause-170137
	I1001 20:07:26.483593   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHPort
	I1001 20:07:26.483797   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHKeyPath
	I1001 20:07:26.483982   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHKeyPath
	I1001 20:07:26.484126   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHUsername
	I1001 20:07:26.484276   55919 main.go:141] libmachine: Using SSH client type: native
	I1001 20:07:26.484528   55919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.12 22 <nil> <nil>}
	I1001 20:07:26.484547   55919 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 20:07:26.605438   55919 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727813246.600478974
	
	I1001 20:07:26.605457   55919 fix.go:216] guest clock: 1727813246.600478974
	I1001 20:07:26.605465   55919 fix.go:229] Guest: 2024-10-01 20:07:26.600478974 +0000 UTC Remote: 2024-10-01 20:07:26.479909602 +0000 UTC m=+22.096499890 (delta=120.569372ms)
	I1001 20:07:26.605501   55919 fix.go:200] guest clock delta is within tolerance: 120.569372ms
	I1001 20:07:26.605506   55919 start.go:83] releasing machines lock for "pause-170137", held for 6.744692212s
	I1001 20:07:26.605531   55919 main.go:141] libmachine: (pause-170137) Calling .DriverName
	I1001 20:07:26.605828   55919 main.go:141] libmachine: (pause-170137) Calling .GetIP
	I1001 20:07:26.609078   55919 main.go:141] libmachine: (pause-170137) DBG | domain pause-170137 has defined MAC address 52:54:00:7a:fc:02 in network mk-pause-170137
	I1001 20:07:26.609511   55919 main.go:141] libmachine: (pause-170137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:fc:02", ip: ""} in network mk-pause-170137: {Iface:virbr2 ExpiryTime:2024-10-01 21:06:22 +0000 UTC Type:0 Mac:52:54:00:7a:fc:02 Iaid: IPaddr:192.168.50.12 Prefix:24 Hostname:pause-170137 Clientid:01:52:54:00:7a:fc:02}
	I1001 20:07:26.609542   55919 main.go:141] libmachine: (pause-170137) DBG | domain pause-170137 has defined IP address 192.168.50.12 and MAC address 52:54:00:7a:fc:02 in network mk-pause-170137
	I1001 20:07:26.609749   55919 main.go:141] libmachine: (pause-170137) Calling .DriverName
	I1001 20:07:26.610467   55919 main.go:141] libmachine: (pause-170137) Calling .DriverName
	I1001 20:07:26.610695   55919 main.go:141] libmachine: (pause-170137) Calling .DriverName
	I1001 20:07:26.610782   55919 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 20:07:26.610827   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHHostname
	I1001 20:07:26.610943   55919 ssh_runner.go:195] Run: cat /version.json
	I1001 20:07:26.610974   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHHostname
	I1001 20:07:26.614776   55919 main.go:141] libmachine: (pause-170137) DBG | domain pause-170137 has defined MAC address 52:54:00:7a:fc:02 in network mk-pause-170137
	I1001 20:07:26.614806   55919 main.go:141] libmachine: (pause-170137) DBG | domain pause-170137 has defined MAC address 52:54:00:7a:fc:02 in network mk-pause-170137
	I1001 20:07:26.615858   55919 main.go:141] libmachine: (pause-170137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:fc:02", ip: ""} in network mk-pause-170137: {Iface:virbr2 ExpiryTime:2024-10-01 21:06:22 +0000 UTC Type:0 Mac:52:54:00:7a:fc:02 Iaid: IPaddr:192.168.50.12 Prefix:24 Hostname:pause-170137 Clientid:01:52:54:00:7a:fc:02}
	I1001 20:07:26.615943   55919 main.go:141] libmachine: (pause-170137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:fc:02", ip: ""} in network mk-pause-170137: {Iface:virbr2 ExpiryTime:2024-10-01 21:06:22 +0000 UTC Type:0 Mac:52:54:00:7a:fc:02 Iaid: IPaddr:192.168.50.12 Prefix:24 Hostname:pause-170137 Clientid:01:52:54:00:7a:fc:02}
	I1001 20:07:26.615985   55919 main.go:141] libmachine: (pause-170137) DBG | domain pause-170137 has defined IP address 192.168.50.12 and MAC address 52:54:00:7a:fc:02 in network mk-pause-170137
	I1001 20:07:26.615999   55919 main.go:141] libmachine: (pause-170137) DBG | domain pause-170137 has defined IP address 192.168.50.12 and MAC address 52:54:00:7a:fc:02 in network mk-pause-170137
	I1001 20:07:26.616149   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHPort
	I1001 20:07:26.616223   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHPort
	I1001 20:07:26.616413   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHKeyPath
	I1001 20:07:26.616448   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHKeyPath
	I1001 20:07:26.616665   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHUsername
	I1001 20:07:26.616678   55919 main.go:141] libmachine: (pause-170137) Calling .GetSSHUsername
	I1001 20:07:26.616818   55919 sshutil.go:53] new ssh client: &{IP:192.168.50.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/pause-170137/id_rsa Username:docker}
	I1001 20:07:26.616818   55919 sshutil.go:53] new ssh client: &{IP:192.168.50.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/pause-170137/id_rsa Username:docker}
	I1001 20:07:26.710735   55919 ssh_runner.go:195] Run: systemctl --version
	I1001 20:07:26.747029   55919 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 20:07:26.927826   55919 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 20:07:26.936422   55919 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 20:07:26.936504   55919 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 20:07:26.948207   55919 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1001 20:07:26.948239   55919 start.go:495] detecting cgroup driver to use...
	I1001 20:07:26.948324   55919 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 20:07:26.977857   55919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 20:07:26.992229   55919 docker.go:217] disabling cri-docker service (if available) ...
	I1001 20:07:26.992297   55919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 20:07:27.011398   55919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 20:07:27.029531   55919 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 20:07:27.201120   55919 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 20:07:27.373725   55919 docker.go:233] disabling docker service ...
	I1001 20:07:27.373804   55919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 20:07:27.394362   55919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 20:07:27.410307   55919 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 20:07:27.597190   55919 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 20:07:27.775895   55919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 20:07:27.794828   55919 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 20:07:27.821786   55919 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 20:07:27.821862   55919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:07:27.837561   55919 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 20:07:27.837631   55919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:07:27.851376   55919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:07:27.868558   55919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:07:27.883020   55919 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 20:07:27.895842   55919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:07:27.913218   55919 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:07:27.927826   55919 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:07:27.942367   55919 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 20:07:27.954932   55919 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 20:07:27.966397   55919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:07:28.209129   55919 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 20:07:28.802797   55919 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 20:07:28.802888   55919 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 20:07:28.807973   55919 start.go:563] Will wait 60s for crictl version
	I1001 20:07:28.808037   55919 ssh_runner.go:195] Run: which crictl
	I1001 20:07:28.812450   55919 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 20:07:28.858046   55919 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 20:07:28.858150   55919 ssh_runner.go:195] Run: crio --version
	I1001 20:07:28.893827   55919 ssh_runner.go:195] Run: crio --version
	I1001 20:07:28.926928   55919 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 20:07:28.928299   55919 main.go:141] libmachine: (pause-170137) Calling .GetIP
	I1001 20:07:28.931791   55919 main.go:141] libmachine: (pause-170137) DBG | domain pause-170137 has defined MAC address 52:54:00:7a:fc:02 in network mk-pause-170137
	I1001 20:07:28.932257   55919 main.go:141] libmachine: (pause-170137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:fc:02", ip: ""} in network mk-pause-170137: {Iface:virbr2 ExpiryTime:2024-10-01 21:06:22 +0000 UTC Type:0 Mac:52:54:00:7a:fc:02 Iaid: IPaddr:192.168.50.12 Prefix:24 Hostname:pause-170137 Clientid:01:52:54:00:7a:fc:02}
	I1001 20:07:28.932281   55919 main.go:141] libmachine: (pause-170137) DBG | domain pause-170137 has defined IP address 192.168.50.12 and MAC address 52:54:00:7a:fc:02 in network mk-pause-170137
	I1001 20:07:28.932587   55919 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1001 20:07:28.937234   55919 kubeadm.go:883] updating cluster {Name:pause-170137 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:pause-170137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.12 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-se
curity-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 20:07:28.937382   55919 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 20:07:28.937441   55919 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:07:28.995234   55919 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 20:07:28.995257   55919 crio.go:433] Images already preloaded, skipping extraction
	I1001 20:07:28.995311   55919 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:07:29.033317   55919 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 20:07:29.033353   55919 cache_images.go:84] Images are preloaded, skipping loading
	I1001 20:07:29.033364   55919 kubeadm.go:934] updating node { 192.168.50.12 8443 v1.31.1 crio true true} ...
	I1001 20:07:29.033510   55919 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-170137 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:pause-170137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 20:07:29.033608   55919 ssh_runner.go:195] Run: crio config
	I1001 20:07:29.089984   55919 cni.go:84] Creating CNI manager for ""
	I1001 20:07:29.090011   55919 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:07:29.090025   55919 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 20:07:29.090052   55919 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.12 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-170137 NodeName:pause-170137 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 20:07:29.090239   55919 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-170137"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.12
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.12"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 20:07:29.090318   55919 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 20:07:29.103526   55919 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 20:07:29.103593   55919 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 20:07:29.114071   55919 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1001 20:07:29.132132   55919 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 20:07:29.150941   55919 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
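	The kubeadm.yaml.new transferred above is a multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, exactly as dumped a few lines earlier. For inspecting such a file outside of minikube, the following Go sketch walks the documents and prints each kind; it is illustrative only (it is not minikube's parser, and it assumes gopkg.in/yaml.v3 is available in your module):

    // inspect_kubeadm_yaml.go - decode a multi-document kubeadm/kubelet/kube-proxy
    // YAML file and print the kind/apiVersion of every document it contains.
    package main

    import (
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        // Path to the generated config, e.g. /var/tmp/minikube/kubeadm.yaml.new
        f, err := os.Open(os.Args[1])
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
        }
    }

	Run as: go run inspect_kubeadm_yaml.go /var/tmp/minikube/kubeadm.yaml.new (the program name is a placeholder).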
	I1001 20:07:29.172318   55919 ssh_runner.go:195] Run: grep 192.168.50.12	control-plane.minikube.internal$ /etc/hosts
	I1001 20:07:29.177557   55919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:07:29.359541   55919 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:07:29.384491   55919 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/pause-170137 for IP: 192.168.50.12
	I1001 20:07:29.384577   55919 certs.go:194] generating shared ca certs ...
	I1001 20:07:29.384605   55919 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:07:29.384788   55919 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 20:07:29.384856   55919 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 20:07:29.384870   55919 certs.go:256] generating profile certs ...
	I1001 20:07:29.384978   55919 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/pause-170137/client.key
	I1001 20:07:29.385054   55919 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/pause-170137/apiserver.key.c413db8c
	I1001 20:07:29.385112   55919 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/pause-170137/proxy-client.key
	I1001 20:07:29.385265   55919 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem (1338 bytes)
	W1001 20:07:29.385295   55919 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430_empty.pem, impossibly tiny 0 bytes
	I1001 20:07:29.385304   55919 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 20:07:29.385334   55919 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 20:07:29.385367   55919 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 20:07:29.385398   55919 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 20:07:29.385457   55919 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem (1708 bytes)
	I1001 20:07:29.386266   55919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 20:07:29.417814   55919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 20:07:29.448485   55919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 20:07:29.520185   55919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 20:07:29.559592   55919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/pause-170137/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1001 20:07:29.595578   55919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/pause-170137/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 20:07:29.640144   55919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/pause-170137/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 20:07:29.715812   55919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/pause-170137/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1001 20:07:29.771465   55919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem --> /usr/share/ca-certificates/18430.pem (1338 bytes)
	I1001 20:07:29.806605   55919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /usr/share/ca-certificates/184302.pem (1708 bytes)
	I1001 20:07:29.839820   55919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 20:07:29.873801   55919 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 20:07:29.925699   55919 ssh_runner.go:195] Run: openssl version
	I1001 20:07:29.933551   55919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18430.pem && ln -fs /usr/share/ca-certificates/18430.pem /etc/ssl/certs/18430.pem"
	I1001 20:07:29.946608   55919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18430.pem
	I1001 20:07:29.956172   55919 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 20:07:29.956233   55919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18430.pem
	I1001 20:07:29.974903   55919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18430.pem /etc/ssl/certs/51391683.0"
	I1001 20:07:29.991453   55919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/184302.pem && ln -fs /usr/share/ca-certificates/184302.pem /etc/ssl/certs/184302.pem"
	I1001 20:07:30.004982   55919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/184302.pem
	I1001 20:07:30.010115   55919 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 20:07:30.010198   55919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/184302.pem
	I1001 20:07:30.018819   55919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/184302.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 20:07:30.032632   55919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 20:07:30.046661   55919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:07:30.054093   55919 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:07:30.054206   55919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:07:30.060680   55919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 20:07:30.072152   55919 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 20:07:30.095774   55919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1001 20:07:30.129718   55919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1001 20:07:30.137523   55919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1001 20:07:30.150149   55919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1001 20:07:30.166513   55919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1001 20:07:30.177429   55919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
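	Each of the openssl runs above uses -checkend 86400, which fails when the certificate will expire within the next 24 hours. The same check can be reproduced with Go's standard library; the sketch below is a stand-alone equivalent for one PEM file, not the code minikube itself executes:

    // checkend.go - report whether a PEM-encoded certificate expires within 24h,
    // mirroring "openssl x509 -noout -checkend 86400".
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        // e.g. /var/lib/minikube/certs/etcd/server.crt
        data, err := os.ReadFile(os.Args[1])
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil || block.Type != "CERTIFICATE" {
            log.Fatal("no CERTIFICATE block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // -checkend 86400 succeeds only if the cert is still valid 24h from now.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h")
            os.Exit(1)
        }
        fmt.Println("certificate is valid for at least another 24h")
    }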
	I1001 20:07:30.188485   55919 kubeadm.go:392] StartCluster: {Name:pause-170137 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-170137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.12 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:07:30.188677   55919 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 20:07:30.188737   55919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 20:07:30.269288   55919 cri.go:89] found id: "2e0d4a72c290677d3b877779ce453b0e69fa0015617db7f645ad4688395ab996"
	I1001 20:07:30.269313   55919 cri.go:89] found id: "1b218656faf8165138d3e921d3db2d506d35669a650f07074504daff545fa656"
	I1001 20:07:30.269322   55919 cri.go:89] found id: "667fbb3e5706931d62ec8064b0e3050f2b38f94dcd31cdef183965f0e1b01717"
	I1001 20:07:30.269329   55919 cri.go:89] found id: "2c5d5a47e7ec478095de41e27338605f0f1dc7fde167537cae026dabaae47a9f"
	I1001 20:07:30.269333   55919 cri.go:89] found id: "6b60a5977fe573029539badfa21465c04853167aa7eec89399606bf02e954db7"
	I1001 20:07:30.269340   55919 cri.go:89] found id: "1aec529783dd207e00f19b8051af6c3aa88dd44bff82a28f85c62cc908a65439"
	I1001 20:07:30.269345   55919 cri.go:89] found id: "97bce934e0f09a84c734707360b83dc0263004bd1431ad3aae6e7a0b1174e051"
	I1001 20:07:30.269348   55919 cri.go:89] found id: "5b805cecbacbe4c96e118593c8cd517128c4ce4b2a630dbfd99de6fd2dbbda89"
	I1001 20:07:30.269352   55919 cri.go:89] found id: ""
	I1001 20:07:30.269400   55919 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
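The "listing CRI containers" step near the end of the stderr block (cri.go:54 / cri.go:89) shells out to crictl with --quiet, which prints one container ID per line; the "found id:" lines are those IDs echoed back. A minimal stand-alone sketch of that call, shown for illustration rather than as minikube's implementation:

    // list_kube_system.go - collect the IDs of all kube-system containers the way
    // the log above does, via:
    //   crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            log.Fatal(err)
        }
        for _, id := range strings.Fields(string(out)) {
            fmt.Println("found id:", id)
        }
    }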
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-170137 -n pause-170137
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-170137 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-170137 logs -n 25: (1.556138188s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| image   | test-preload-118977 image list | test-preload-118977      | jenkins | v1.34.0 | 01 Oct 24 20:03 UTC | 01 Oct 24 20:03 UTC |
	| delete  | -p test-preload-118977         | test-preload-118977      | jenkins | v1.34.0 | 01 Oct 24 20:03 UTC | 01 Oct 24 20:03 UTC |
	| start   | -p scheduled-stop-142421       | scheduled-stop-142421    | jenkins | v1.34.0 | 01 Oct 24 20:03 UTC | 01 Oct 24 20:04 UTC |
	|         | --memory=2048 --driver=kvm2    |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-142421       | scheduled-stop-142421    | jenkins | v1.34.0 | 01 Oct 24 20:04 UTC |                     |
	|         | --schedule 5m                  |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-142421       | scheduled-stop-142421    | jenkins | v1.34.0 | 01 Oct 24 20:04 UTC |                     |
	|         | --schedule 5m                  |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-142421       | scheduled-stop-142421    | jenkins | v1.34.0 | 01 Oct 24 20:04 UTC |                     |
	|         | --schedule 5m                  |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-142421       | scheduled-stop-142421    | jenkins | v1.34.0 | 01 Oct 24 20:04 UTC |                     |
	|         | --schedule 15s                 |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-142421       | scheduled-stop-142421    | jenkins | v1.34.0 | 01 Oct 24 20:04 UTC |                     |
	|         | --schedule 15s                 |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-142421       | scheduled-stop-142421    | jenkins | v1.34.0 | 01 Oct 24 20:04 UTC |                     |
	|         | --schedule 15s                 |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-142421       | scheduled-stop-142421    | jenkins | v1.34.0 | 01 Oct 24 20:04 UTC | 01 Oct 24 20:04 UTC |
	|         | --cancel-scheduled             |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-142421       | scheduled-stop-142421    | jenkins | v1.34.0 | 01 Oct 24 20:04 UTC |                     |
	|         | --schedule 15s                 |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-142421       | scheduled-stop-142421    | jenkins | v1.34.0 | 01 Oct 24 20:04 UTC |                     |
	|         | --schedule 15s                 |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-142421       | scheduled-stop-142421    | jenkins | v1.34.0 | 01 Oct 24 20:04 UTC | 01 Oct 24 20:05 UTC |
	|         | --schedule 15s                 |                          |         |         |                     |                     |
	| delete  | -p scheduled-stop-142421       | scheduled-stop-142421    | jenkins | v1.34.0 | 01 Oct 24 20:05 UTC | 01 Oct 24 20:05 UTC |
	| start   | -p offline-crio-770413         | offline-crio-770413      | jenkins | v1.34.0 | 01 Oct 24 20:05 UTC | 01 Oct 24 20:07 UTC |
	|         | --alsologtostderr              |                          |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                          |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| start   | -p pause-170137 --memory=2048  | pause-170137             | jenkins | v1.34.0 | 01 Oct 24 20:05 UTC | 01 Oct 24 20:07 UTC |
	|         | --install-addons=false         |                          |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| start   | -p NoKubernetes-791490         | NoKubernetes-791490      | jenkins | v1.34.0 | 01 Oct 24 20:05 UTC |                     |
	|         | --no-kubernetes                |                          |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                          |         |         |                     |                     |
	|         | --driver=kvm2                  |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| start   | -p NoKubernetes-791490         | NoKubernetes-791490      | jenkins | v1.34.0 | 01 Oct 24 20:05 UTC | 01 Oct 24 20:07 UTC |
	|         | --driver=kvm2                  |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| start   | -p running-upgrade-819936      | minikube                 | jenkins | v1.26.0 | 01 Oct 24 20:05 UTC | 01 Oct 24 20:07 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                          |         |         |                     |                     |
	|         |  --container-runtime=crio      |                          |         |         |                     |                     |
	| start   | -p pause-170137                | pause-170137             | jenkins | v1.34.0 | 01 Oct 24 20:07 UTC | 01 Oct 24 20:07 UTC |
	|         | --alsologtostderr              |                          |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| delete  | -p offline-crio-770413         | offline-crio-770413      | jenkins | v1.34.0 | 01 Oct 24 20:07 UTC | 01 Oct 24 20:07 UTC |
	| start   | -p force-systemd-env-528861    | force-systemd-env-528861 | jenkins | v1.34.0 | 01 Oct 24 20:07 UTC |                     |
	|         | --memory=2048                  |                          |         |         |                     |                     |
	|         | --alsologtostderr              |                          |         |         |                     |                     |
	|         | -v=5 --driver=kvm2             |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| start   | -p NoKubernetes-791490         | NoKubernetes-791490      | jenkins | v1.34.0 | 01 Oct 24 20:07 UTC | 01 Oct 24 20:07 UTC |
	|         | --no-kubernetes --driver=kvm2  |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| start   | -p running-upgrade-819936      | running-upgrade-819936   | jenkins | v1.34.0 | 01 Oct 24 20:07 UTC |                     |
	|         | --memory=2200                  |                          |         |         |                     |                     |
	|         | --alsologtostderr              |                          |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| delete  | -p NoKubernetes-791490         | NoKubernetes-791490      | jenkins | v1.34.0 | 01 Oct 24 20:07 UTC |                     |
	|---------|--------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 20:07:48
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 20:07:48.384813   56571 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:07:48.384945   56571 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:07:48.384956   56571 out.go:358] Setting ErrFile to fd 2...
	I1001 20:07:48.384963   56571 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:07:48.385246   56571 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 20:07:48.385979   56571 out.go:352] Setting JSON to false
	I1001 20:07:48.387217   56571 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6610,"bootTime":1727806658,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 20:07:48.387344   56571 start.go:139] virtualization: kvm guest
	I1001 20:07:48.389515   56571 out.go:177] * [running-upgrade-819936] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 20:07:48.390764   56571 notify.go:220] Checking for updates...
	I1001 20:07:48.390783   56571 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 20:07:48.391850   56571 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 20:07:48.392945   56571 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:07:48.393957   56571 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 20:07:48.394946   56571 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 20:07:48.395951   56571 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 20:07:48.397257   56571 config.go:182] Loaded profile config "running-upgrade-819936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1001 20:07:48.397667   56571 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:07:48.397738   56571 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:07:48.415104   56571 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41137
	I1001 20:07:48.415604   56571 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:07:48.416214   56571 main.go:141] libmachine: Using API Version  1
	I1001 20:07:48.416237   56571 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:07:48.416688   56571 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:07:48.416892   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .DriverName
	I1001 20:07:48.418334   56571 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1001 20:07:48.419359   56571 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 20:07:48.419656   56571 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:07:48.419690   56571 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:07:48.435716   56571 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34517
	I1001 20:07:48.436113   56571 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:07:48.436670   56571 main.go:141] libmachine: Using API Version  1
	I1001 20:07:48.436705   56571 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:07:48.437005   56571 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:07:48.437165   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .DriverName
	I1001 20:07:48.474395   56571 out.go:177] * Using the kvm2 driver based on existing profile
	I1001 20:07:48.475448   56571 start.go:297] selected driver: kvm2
	I1001 20:07:48.475472   56571 start.go:901] validating driver "kvm2" against &{Name:running-upgrade-819936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-819936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1001 20:07:48.475644   56571 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 20:07:48.476453   56571 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:07:48.476534   56571 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-11198/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 20:07:48.496088   56571 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 20:07:48.496597   56571 cni.go:84] Creating CNI manager for ""
	I1001 20:07:48.496659   56571 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:07:48.496728   56571 start.go:340] cluster config:
	{Name:running-upgrade-819936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-819936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1001 20:07:48.496961   56571 iso.go:125] acquiring lock: {Name:mk06aa0d5182c9fbfa5e40313025cbec2b4400a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:07:48.498549   56571 out.go:177] * Starting "running-upgrade-819936" primary control-plane node in "running-upgrade-819936" cluster
	I1001 20:07:48.481278   56202 start.go:364] duration metric: took 28.564457095s to acquireMachinesLock for "NoKubernetes-791490"
	I1001 20:07:48.481311   56202 start.go:96] Skipping create...Using existing machine configuration
	I1001 20:07:48.481317   56202 fix.go:54] fixHost starting: 
	I1001 20:07:48.481743   56202 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:07:48.481823   56202 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:07:48.501636   56202 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43929
	I1001 20:07:48.502138   56202 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:07:48.502718   56202 main.go:141] libmachine: Using API Version  1
	I1001 20:07:48.502735   56202 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:07:48.503078   56202 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:07:48.503305   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .DriverName
	I1001 20:07:48.503465   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetState
	I1001 20:07:48.505252   56202 fix.go:112] recreateIfNeeded on NoKubernetes-791490: state=Running err=<nil>
	W1001 20:07:48.505268   56202 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 20:07:48.506654   56202 out.go:177] * Updating the running kvm2 "NoKubernetes-791490" VM ...
	I1001 20:07:48.507565   56202 machine.go:93] provisionDockerMachine start ...
	I1001 20:07:48.507580   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .DriverName
	I1001 20:07:48.507796   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHHostname
	I1001 20:07:48.510780   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | domain NoKubernetes-791490 has defined MAC address 52:54:00:cf:81:3c in network mk-NoKubernetes-791490
	I1001 20:07:48.511383   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:3c", ip: ""} in network mk-NoKubernetes-791490: {Iface:virbr3 ExpiryTime:2024-10-01 21:06:48 +0000 UTC Type:0 Mac:52:54:00:cf:81:3c Iaid: IPaddr:192.168.61.118 Prefix:24 Hostname:NoKubernetes-791490 Clientid:01:52:54:00:cf:81:3c}
	I1001 20:07:48.511410   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | domain NoKubernetes-791490 has defined IP address 192.168.61.118 and MAC address 52:54:00:cf:81:3c in network mk-NoKubernetes-791490
	I1001 20:07:48.511571   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHPort
	I1001 20:07:48.511779   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHKeyPath
	I1001 20:07:48.511919   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHKeyPath
	I1001 20:07:48.512049   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHUsername
	I1001 20:07:48.512237   56202 main.go:141] libmachine: Using SSH client type: native
	I1001 20:07:48.512492   56202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.118 22 <nil> <nil>}
	I1001 20:07:48.512503   56202 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 20:07:48.626251   56202 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-791490
	
	I1001 20:07:48.626276   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetMachineName
	I1001 20:07:48.626534   56202 buildroot.go:166] provisioning hostname "NoKubernetes-791490"
	I1001 20:07:48.626555   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetMachineName
	I1001 20:07:48.626764   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHHostname
	I1001 20:07:48.629841   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | domain NoKubernetes-791490 has defined MAC address 52:54:00:cf:81:3c in network mk-NoKubernetes-791490
	I1001 20:07:48.630290   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:3c", ip: ""} in network mk-NoKubernetes-791490: {Iface:virbr3 ExpiryTime:2024-10-01 21:06:48 +0000 UTC Type:0 Mac:52:54:00:cf:81:3c Iaid: IPaddr:192.168.61.118 Prefix:24 Hostname:NoKubernetes-791490 Clientid:01:52:54:00:cf:81:3c}
	I1001 20:07:48.630311   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | domain NoKubernetes-791490 has defined IP address 192.168.61.118 and MAC address 52:54:00:cf:81:3c in network mk-NoKubernetes-791490
	I1001 20:07:48.630527   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHPort
	I1001 20:07:48.630770   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHKeyPath
	I1001 20:07:48.630907   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHKeyPath
	I1001 20:07:48.631080   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHUsername
	I1001 20:07:48.631257   56202 main.go:141] libmachine: Using SSH client type: native
	I1001 20:07:48.631466   56202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.118 22 <nil> <nil>}
	I1001 20:07:48.631476   56202 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-791490 && echo "NoKubernetes-791490" | sudo tee /etc/hostname
	I1001 20:07:48.762854   56202 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-791490
	
	I1001 20:07:48.762876   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHHostname
	I1001 20:07:48.765990   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | domain NoKubernetes-791490 has defined MAC address 52:54:00:cf:81:3c in network mk-NoKubernetes-791490
	I1001 20:07:48.766416   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:3c", ip: ""} in network mk-NoKubernetes-791490: {Iface:virbr3 ExpiryTime:2024-10-01 21:06:48 +0000 UTC Type:0 Mac:52:54:00:cf:81:3c Iaid: IPaddr:192.168.61.118 Prefix:24 Hostname:NoKubernetes-791490 Clientid:01:52:54:00:cf:81:3c}
	I1001 20:07:48.766443   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | domain NoKubernetes-791490 has defined IP address 192.168.61.118 and MAC address 52:54:00:cf:81:3c in network mk-NoKubernetes-791490
	I1001 20:07:48.766615   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHPort
	I1001 20:07:48.766818   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHKeyPath
	I1001 20:07:48.766982   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHKeyPath
	I1001 20:07:48.767130   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHUsername
	I1001 20:07:48.767301   56202 main.go:141] libmachine: Using SSH client type: native
	I1001 20:07:48.767469   56202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.118 22 <nil> <nil>}
	I1001 20:07:48.767482   56202 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-791490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-791490/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-791490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 20:07:48.885953   56202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 20:07:48.885976   56202 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 20:07:48.886017   56202 buildroot.go:174] setting up certificates
	I1001 20:07:48.886026   56202 provision.go:84] configureAuth start
	I1001 20:07:48.886038   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetMachineName
	I1001 20:07:48.886316   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetIP
	I1001 20:07:48.889823   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | domain NoKubernetes-791490 has defined MAC address 52:54:00:cf:81:3c in network mk-NoKubernetes-791490
	I1001 20:07:48.890357   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:3c", ip: ""} in network mk-NoKubernetes-791490: {Iface:virbr3 ExpiryTime:2024-10-01 21:06:48 +0000 UTC Type:0 Mac:52:54:00:cf:81:3c Iaid: IPaddr:192.168.61.118 Prefix:24 Hostname:NoKubernetes-791490 Clientid:01:52:54:00:cf:81:3c}
	I1001 20:07:48.890390   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | domain NoKubernetes-791490 has defined IP address 192.168.61.118 and MAC address 52:54:00:cf:81:3c in network mk-NoKubernetes-791490
	I1001 20:07:48.890673   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHHostname
	I1001 20:07:48.893788   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | domain NoKubernetes-791490 has defined MAC address 52:54:00:cf:81:3c in network mk-NoKubernetes-791490
	I1001 20:07:48.894265   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:3c", ip: ""} in network mk-NoKubernetes-791490: {Iface:virbr3 ExpiryTime:2024-10-01 21:06:48 +0000 UTC Type:0 Mac:52:54:00:cf:81:3c Iaid: IPaddr:192.168.61.118 Prefix:24 Hostname:NoKubernetes-791490 Clientid:01:52:54:00:cf:81:3c}
	I1001 20:07:48.894291   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | domain NoKubernetes-791490 has defined IP address 192.168.61.118 and MAC address 52:54:00:cf:81:3c in network mk-NoKubernetes-791490
	I1001 20:07:48.894418   56202 provision.go:143] copyHostCerts
	I1001 20:07:48.894474   56202 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 20:07:48.894481   56202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 20:07:48.894563   56202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 20:07:48.894681   56202 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 20:07:48.894685   56202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 20:07:48.894712   56202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 20:07:48.894761   56202 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 20:07:48.894763   56202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 20:07:48.894778   56202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 20:07:48.894833   56202 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-791490 san=[127.0.0.1 192.168.61.118 NoKubernetes-791490 localhost minikube]
	I1001 20:07:49.093281   56202 provision.go:177] copyRemoteCerts
	I1001 20:07:49.093369   56202 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 20:07:49.093395   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHHostname
	I1001 20:07:49.097009   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | domain NoKubernetes-791490 has defined MAC address 52:54:00:cf:81:3c in network mk-NoKubernetes-791490
	I1001 20:07:49.097530   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:3c", ip: ""} in network mk-NoKubernetes-791490: {Iface:virbr3 ExpiryTime:2024-10-01 21:06:48 +0000 UTC Type:0 Mac:52:54:00:cf:81:3c Iaid: IPaddr:192.168.61.118 Prefix:24 Hostname:NoKubernetes-791490 Clientid:01:52:54:00:cf:81:3c}
	I1001 20:07:49.097554   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | domain NoKubernetes-791490 has defined IP address 192.168.61.118 and MAC address 52:54:00:cf:81:3c in network mk-NoKubernetes-791490
	I1001 20:07:49.097820   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHPort
	I1001 20:07:49.097993   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHKeyPath
	I1001 20:07:49.098099   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHUsername
	I1001 20:07:49.098277   56202 sshutil.go:53] new ssh client: &{IP:192.168.61.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/NoKubernetes-791490/id_rsa Username:docker}
	I1001 20:07:49.187031   56202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 20:07:49.216896   56202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1001 20:07:49.247499   56202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 20:07:49.272611   56202 provision.go:87] duration metric: took 386.574391ms to configureAuth
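	The configureAuth phase above generates a server certificate whose SAN list mixes IP addresses and host names (san=[127.0.0.1 192.168.61.118 NoKubernetes-791490 localhost minikube]) and then copies server.pem/server-key.pem into /etc/docker. The sketch below shows how such a SAN list is expressed with Go's crypto/x509; it is self-signed for brevity, whereas minikube signs against its ca.pem/ca-key.pem, so treat it as an illustration of the certificate shape only:

    // selfsigned_server_cert.go - emit a self-signed certificate carrying the same
    // kind of SAN list as the provisioner's server cert (IPs plus host names).
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.NoKubernetes-791490"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs matching the san=[...] list in the provisioning log above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.118")},
            DNSNames:    []string{"NoKubernetes-791490", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            log.Fatal(err)
        }
    }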
	I1001 20:07:49.272631   56202 buildroot.go:189] setting minikube options for container-runtime
	I1001 20:07:49.272838   56202 config.go:182] Loaded profile config "NoKubernetes-791490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1001 20:07:49.272906   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHHostname
	I1001 20:07:49.275898   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | domain NoKubernetes-791490 has defined MAC address 52:54:00:cf:81:3c in network mk-NoKubernetes-791490
	I1001 20:07:49.276202   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:3c", ip: ""} in network mk-NoKubernetes-791490: {Iface:virbr3 ExpiryTime:2024-10-01 21:06:48 +0000 UTC Type:0 Mac:52:54:00:cf:81:3c Iaid: IPaddr:192.168.61.118 Prefix:24 Hostname:NoKubernetes-791490 Clientid:01:52:54:00:cf:81:3c}
	I1001 20:07:49.276237   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | domain NoKubernetes-791490 has defined IP address 192.168.61.118 and MAC address 52:54:00:cf:81:3c in network mk-NoKubernetes-791490
	I1001 20:07:49.276435   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHPort
	I1001 20:07:49.276698   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHKeyPath
	I1001 20:07:49.276919   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHKeyPath
	I1001 20:07:49.277097   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHUsername
	I1001 20:07:49.277309   56202 main.go:141] libmachine: Using SSH client type: native
	I1001 20:07:49.277506   56202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.118 22 <nil> <nil>}
	I1001 20:07:49.277738   56202 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 20:07:46.784766   55919 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 20:07:46.797502   55919 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1001 20:07:46.818790   55919 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 20:07:46.818864   55919 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1001 20:07:46.818883   55919 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1001 20:07:46.828749   55919 system_pods.go:59] 6 kube-system pods found
	I1001 20:07:46.828784   55919 system_pods.go:61] "coredns-7c65d6cfc9-8tqn8" [b42e5352-5fa7-4a31-97a6-13e95b760487] Running
	I1001 20:07:46.828791   55919 system_pods.go:61] "etcd-pause-170137" [2a159ffc-cdff-49c4-b46e-209ea3d9bc05] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1001 20:07:46.828797   55919 system_pods.go:61] "kube-apiserver-pause-170137" [fd49f289-ef88-47d0-987f-cd35b4bcc962] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1001 20:07:46.828807   55919 system_pods.go:61] "kube-controller-manager-pause-170137" [0f77180d-0176-41ef-b45c-7d8b7b175e4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1001 20:07:46.828814   55919 system_pods.go:61] "kube-proxy-ffrj7" [9579b36d-adb4-4b12-a1de-b318cb62b8a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1001 20:07:46.828820   55919 system_pods.go:61] "kube-scheduler-pause-170137" [bb44a227-37f4-4c2f-badf-8f9c9a7e49a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1001 20:07:46.828828   55919 system_pods.go:74] duration metric: took 10.015009ms to wait for pod list to return data ...
	I1001 20:07:46.828842   55919 node_conditions.go:102] verifying NodePressure condition ...
	I1001 20:07:46.833198   55919 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 20:07:46.833229   55919 node_conditions.go:123] node cpu capacity is 2
	I1001 20:07:46.833239   55919 node_conditions.go:105] duration metric: took 4.392219ms to run NodePressure ...
	I1001 20:07:46.833254   55919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:07:47.130147   55919 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1001 20:07:47.136184   55919 kubeadm.go:739] kubelet initialised
	I1001 20:07:47.136205   55919 kubeadm.go:740] duration metric: took 6.032719ms waiting for restarted kubelet to initialise ...
	I1001 20:07:47.136212   55919 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:07:47.142450   55919 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-8tqn8" in "kube-system" namespace to be "Ready" ...
	I1001 20:07:47.154736   55919 pod_ready.go:93] pod "coredns-7c65d6cfc9-8tqn8" in "kube-system" namespace has status "Ready":"True"
	I1001 20:07:47.154762   55919 pod_ready.go:82] duration metric: took 12.283043ms for pod "coredns-7c65d6cfc9-8tqn8" in "kube-system" namespace to be "Ready" ...
	I1001 20:07:47.154775   55919 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-170137" in "kube-system" namespace to be "Ready" ...
	I1001 20:07:48.163678   55919 pod_ready.go:93] pod "etcd-pause-170137" in "kube-system" namespace has status "Ready":"True"
	I1001 20:07:48.163707   55919 pod_ready.go:82] duration metric: took 1.008924483s for pod "etcd-pause-170137" in "kube-system" namespace to be "Ready" ...
	I1001 20:07:48.163717   55919 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-170137" in "kube-system" namespace to be "Ready" ...
	I1001 20:07:46.928094   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:46.928660   56047 main.go:141] libmachine: (force-systemd-env-528861) Found IP for machine: 192.168.39.66
	I1001 20:07:46.928687   56047 main.go:141] libmachine: (force-systemd-env-528861) Reserving static IP address...
	I1001 20:07:46.928702   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has current primary IP address 192.168.39.66 and MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:46.929059   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | unable to find host DHCP lease matching {name: "force-systemd-env-528861", mac: "52:54:00:73:c0:47", ip: "192.168.39.66"} in network mk-force-systemd-env-528861
	I1001 20:07:47.037691   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | Getting to WaitForSSH function...
	I1001 20:07:47.037723   56047 main.go:141] libmachine: (force-systemd-env-528861) Reserved static IP address: 192.168.39.66
	I1001 20:07:47.037732   56047 main.go:141] libmachine: (force-systemd-env-528861) Waiting for SSH to be available...
	I1001 20:07:47.041330   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:47.041688   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:47", ip: ""} in network mk-force-systemd-env-528861: {Iface:virbr1 ExpiryTime:2024-10-01 21:07:41 +0000 UTC Type:0 Mac:52:54:00:73:c0:47 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:minikube Clientid:01:52:54:00:73:c0:47}
	I1001 20:07:47.041714   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined IP address 192.168.39.66 and MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:47.042009   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | Using SSH client type: external
	I1001 20:07:47.042028   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/force-systemd-env-528861/id_rsa (-rw-------)
	I1001 20:07:47.042065   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.66 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/force-systemd-env-528861/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 20:07:47.042075   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | About to run SSH command:
	I1001 20:07:47.042087   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | exit 0
	I1001 20:07:47.164681   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | SSH cmd err, output: <nil>: 
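[editor's note] The WaitForSSH probe above shells out to the system ssh binary with a throwaway known-hosts file and runs `exit 0` until the guest answers. A rough os/exec sketch of that invocation (argument order simplified; the IP and key path are copied from this run):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Non-interactive ssh probe: succeed as soon as the guest can run `exit 0`.
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no",
		"-o", "ControlPath=none",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/19736-11198/.minikube/machines/force-systemd-env-528861/id_rsa",
		"-p", "22",
		"docker@192.168.39.66",
		"exit 0",
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	fmt.Printf("err=%v output=%q\n", err, out)
}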
	I1001 20:07:47.164957   56047 main.go:141] libmachine: (force-systemd-env-528861) KVM machine creation complete!
	I1001 20:07:47.165308   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetConfigRaw
	I1001 20:07:47.165970   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .DriverName
	I1001 20:07:47.166255   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .DriverName
	I1001 20:07:47.166433   56047 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 20:07:47.166451   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetState
	I1001 20:07:47.167832   56047 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 20:07:47.167849   56047 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 20:07:47.167856   56047 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 20:07:47.167865   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHHostname
	I1001 20:07:47.170886   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:47.171333   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:47", ip: ""} in network mk-force-systemd-env-528861: {Iface:virbr1 ExpiryTime:2024-10-01 21:07:41 +0000 UTC Type:0 Mac:52:54:00:73:c0:47 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:force-systemd-env-528861 Clientid:01:52:54:00:73:c0:47}
	I1001 20:07:47.171362   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined IP address 192.168.39.66 and MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:47.171610   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHPort
	I1001 20:07:47.171830   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHKeyPath
	I1001 20:07:47.172009   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHKeyPath
	I1001 20:07:47.172204   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHUsername
	I1001 20:07:47.172437   56047 main.go:141] libmachine: Using SSH client type: native
	I1001 20:07:47.172733   56047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I1001 20:07:47.172750   56047 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 20:07:47.271694   56047 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 20:07:47.271719   56047 main.go:141] libmachine: Detecting the provisioner...
	I1001 20:07:47.271728   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHHostname
	I1001 20:07:47.274825   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:47.275217   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:47", ip: ""} in network mk-force-systemd-env-528861: {Iface:virbr1 ExpiryTime:2024-10-01 21:07:41 +0000 UTC Type:0 Mac:52:54:00:73:c0:47 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:force-systemd-env-528861 Clientid:01:52:54:00:73:c0:47}
	I1001 20:07:47.275260   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined IP address 192.168.39.66 and MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:47.275407   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHPort
	I1001 20:07:47.275583   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHKeyPath
	I1001 20:07:47.275764   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHKeyPath
	I1001 20:07:47.275929   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHUsername
	I1001 20:07:47.276121   56047 main.go:141] libmachine: Using SSH client type: native
	I1001 20:07:47.276300   56047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I1001 20:07:47.276314   56047 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 20:07:47.381156   56047 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 20:07:47.381248   56047 main.go:141] libmachine: found compatible host: buildroot
	I1001 20:07:47.381261   56047 main.go:141] libmachine: Provisioning with buildroot...
	I1001 20:07:47.381273   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetMachineName
	I1001 20:07:47.381540   56047 buildroot.go:166] provisioning hostname "force-systemd-env-528861"
	I1001 20:07:47.381568   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetMachineName
	I1001 20:07:47.381753   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHHostname
	I1001 20:07:47.385460   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:47.385843   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:47", ip: ""} in network mk-force-systemd-env-528861: {Iface:virbr1 ExpiryTime:2024-10-01 21:07:41 +0000 UTC Type:0 Mac:52:54:00:73:c0:47 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:force-systemd-env-528861 Clientid:01:52:54:00:73:c0:47}
	I1001 20:07:47.385869   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined IP address 192.168.39.66 and MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:47.386119   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHPort
	I1001 20:07:47.386335   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHKeyPath
	I1001 20:07:47.386515   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHKeyPath
	I1001 20:07:47.386646   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHUsername
	I1001 20:07:47.386835   56047 main.go:141] libmachine: Using SSH client type: native
	I1001 20:07:47.387017   56047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I1001 20:07:47.387034   56047 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-528861 && echo "force-systemd-env-528861" | sudo tee /etc/hostname
	I1001 20:07:47.504737   56047 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-528861
	
	I1001 20:07:47.504772   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHHostname
	I1001 20:07:47.508146   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:47.508591   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:47", ip: ""} in network mk-force-systemd-env-528861: {Iface:virbr1 ExpiryTime:2024-10-01 21:07:41 +0000 UTC Type:0 Mac:52:54:00:73:c0:47 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:force-systemd-env-528861 Clientid:01:52:54:00:73:c0:47}
	I1001 20:07:47.508624   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined IP address 192.168.39.66 and MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:47.508915   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHPort
	I1001 20:07:47.509117   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHKeyPath
	I1001 20:07:47.509318   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHKeyPath
	I1001 20:07:47.509521   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHUsername
	I1001 20:07:47.509706   56047 main.go:141] libmachine: Using SSH client type: native
	I1001 20:07:47.509936   56047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I1001 20:07:47.509965   56047 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-528861' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-528861/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-528861' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 20:07:47.628266   56047 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 20:07:47.628301   56047 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 20:07:47.628385   56047 buildroot.go:174] setting up certificates
	I1001 20:07:47.628417   56047 provision.go:84] configureAuth start
	I1001 20:07:47.628435   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetMachineName
	I1001 20:07:47.628766   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetIP
	I1001 20:07:47.631702   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:47.632109   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:47", ip: ""} in network mk-force-systemd-env-528861: {Iface:virbr1 ExpiryTime:2024-10-01 21:07:41 +0000 UTC Type:0 Mac:52:54:00:73:c0:47 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:force-systemd-env-528861 Clientid:01:52:54:00:73:c0:47}
	I1001 20:07:47.632140   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined IP address 192.168.39.66 and MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:47.632273   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHHostname
	I1001 20:07:47.634897   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:47.635291   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:47", ip: ""} in network mk-force-systemd-env-528861: {Iface:virbr1 ExpiryTime:2024-10-01 21:07:41 +0000 UTC Type:0 Mac:52:54:00:73:c0:47 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:force-systemd-env-528861 Clientid:01:52:54:00:73:c0:47}
	I1001 20:07:47.635317   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined IP address 192.168.39.66 and MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:47.635436   56047 provision.go:143] copyHostCerts
	I1001 20:07:47.635466   56047 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 20:07:47.635530   56047 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 20:07:47.635544   56047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 20:07:47.635606   56047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 20:07:47.635705   56047 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 20:07:47.635740   56047 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 20:07:47.635750   56047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 20:07:47.635785   56047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 20:07:47.635853   56047 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 20:07:47.635877   56047 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 20:07:47.635885   56047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 20:07:47.635914   56047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 20:07:47.635985   56047 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-528861 san=[127.0.0.1 192.168.39.66 force-systemd-env-528861 localhost minikube]
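[editor's note] configureAuth signs a server certificate against the local CA with the SANs listed above (loopback, the VM IP, the machine name, localhost, minikube). A minimal crypto/x509 sketch of that shape, using a throwaway CA and assumed key sizes rather than minikube's real certificate helpers (error handling elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA used only to illustrate the signing step.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert whose SANs mirror the ones in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.force-systemd-env-528861"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"force-systemd-env-528861", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.66")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}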
	I1001 20:07:47.806028   56047 provision.go:177] copyRemoteCerts
	I1001 20:07:47.806085   56047 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 20:07:47.806109   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHHostname
	I1001 20:07:47.808799   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:47.809145   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:47", ip: ""} in network mk-force-systemd-env-528861: {Iface:virbr1 ExpiryTime:2024-10-01 21:07:41 +0000 UTC Type:0 Mac:52:54:00:73:c0:47 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:force-systemd-env-528861 Clientid:01:52:54:00:73:c0:47}
	I1001 20:07:47.809174   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined IP address 192.168.39.66 and MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:47.809339   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHPort
	I1001 20:07:47.809524   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHKeyPath
	I1001 20:07:47.809668   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHUsername
	I1001 20:07:47.809788   56047 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/force-systemd-env-528861/id_rsa Username:docker}
	I1001 20:07:47.890901   56047 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 20:07:47.890993   56047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 20:07:47.917572   56047 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 20:07:47.917665   56047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1001 20:07:47.943925   56047 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 20:07:47.944009   56047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 20:07:47.968585   56047 provision.go:87] duration metric: took 340.154088ms to configureAuth
	I1001 20:07:47.968615   56047 buildroot.go:189] setting minikube options for container-runtime
	I1001 20:07:47.968819   56047 config.go:182] Loaded profile config "force-systemd-env-528861": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:07:47.968902   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHHostname
	I1001 20:07:47.971732   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:47.972057   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:47", ip: ""} in network mk-force-systemd-env-528861: {Iface:virbr1 ExpiryTime:2024-10-01 21:07:41 +0000 UTC Type:0 Mac:52:54:00:73:c0:47 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:force-systemd-env-528861 Clientid:01:52:54:00:73:c0:47}
	I1001 20:07:47.972101   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined IP address 192.168.39.66 and MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:47.972304   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHPort
	I1001 20:07:47.972497   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHKeyPath
	I1001 20:07:47.972655   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHKeyPath
	I1001 20:07:47.972778   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHUsername
	I1001 20:07:47.972941   56047 main.go:141] libmachine: Using SSH client type: native
	I1001 20:07:47.973152   56047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I1001 20:07:47.973172   56047 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 20:07:48.223312   56047 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 20:07:48.223342   56047 main.go:141] libmachine: Checking connection to Docker...
	I1001 20:07:48.223354   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetURL
	I1001 20:07:48.224792   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | Using libvirt version 6000000
	I1001 20:07:48.227874   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:48.228266   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:47", ip: ""} in network mk-force-systemd-env-528861: {Iface:virbr1 ExpiryTime:2024-10-01 21:07:41 +0000 UTC Type:0 Mac:52:54:00:73:c0:47 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:force-systemd-env-528861 Clientid:01:52:54:00:73:c0:47}
	I1001 20:07:48.228303   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined IP address 192.168.39.66 and MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:48.228494   56047 main.go:141] libmachine: Docker is up and running!
	I1001 20:07:48.228509   56047 main.go:141] libmachine: Reticulating splines...
	I1001 20:07:48.228516   56047 client.go:171] duration metric: took 21.597905879s to LocalClient.Create
	I1001 20:07:48.228539   56047 start.go:167] duration metric: took 21.597972555s to libmachine.API.Create "force-systemd-env-528861"
	I1001 20:07:48.228549   56047 start.go:293] postStartSetup for "force-systemd-env-528861" (driver="kvm2")
	I1001 20:07:48.228564   56047 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 20:07:48.228586   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .DriverName
	I1001 20:07:48.228809   56047 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 20:07:48.228833   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHHostname
	I1001 20:07:48.231286   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:48.231634   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:47", ip: ""} in network mk-force-systemd-env-528861: {Iface:virbr1 ExpiryTime:2024-10-01 21:07:41 +0000 UTC Type:0 Mac:52:54:00:73:c0:47 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:force-systemd-env-528861 Clientid:01:52:54:00:73:c0:47}
	I1001 20:07:48.231671   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined IP address 192.168.39.66 and MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:48.231910   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHPort
	I1001 20:07:48.232105   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHKeyPath
	I1001 20:07:48.232320   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHUsername
	I1001 20:07:48.232512   56047 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/force-systemd-env-528861/id_rsa Username:docker}
	I1001 20:07:48.319090   56047 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 20:07:48.323953   56047 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 20:07:48.323983   56047 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 20:07:48.324052   56047 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 20:07:48.324159   56047 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 20:07:48.324177   56047 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /etc/ssl/certs/184302.pem
	I1001 20:07:48.324298   56047 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 20:07:48.334887   56047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 20:07:48.364743   56047 start.go:296] duration metric: took 136.180649ms for postStartSetup
	I1001 20:07:48.364787   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetConfigRaw
	I1001 20:07:48.365385   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetIP
	I1001 20:07:48.368907   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:48.369318   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:47", ip: ""} in network mk-force-systemd-env-528861: {Iface:virbr1 ExpiryTime:2024-10-01 21:07:41 +0000 UTC Type:0 Mac:52:54:00:73:c0:47 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:force-systemd-env-528861 Clientid:01:52:54:00:73:c0:47}
	I1001 20:07:48.369361   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined IP address 192.168.39.66 and MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:48.369919   56047 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/config.json ...
	I1001 20:07:48.370181   56047 start.go:128] duration metric: took 21.764377344s to createHost
	I1001 20:07:48.370210   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHHostname
	I1001 20:07:48.373519   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:48.373552   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:47", ip: ""} in network mk-force-systemd-env-528861: {Iface:virbr1 ExpiryTime:2024-10-01 21:07:41 +0000 UTC Type:0 Mac:52:54:00:73:c0:47 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:force-systemd-env-528861 Clientid:01:52:54:00:73:c0:47}
	I1001 20:07:48.373569   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined IP address 192.168.39.66 and MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:48.373601   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHPort
	I1001 20:07:48.373771   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHKeyPath
	I1001 20:07:48.373879   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHKeyPath
	I1001 20:07:48.374068   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHUsername
	I1001 20:07:48.374177   56047 main.go:141] libmachine: Using SSH client type: native
	I1001 20:07:48.374421   56047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I1001 20:07:48.374443   56047 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 20:07:48.481142   56047 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727813268.457992919
	
	I1001 20:07:48.481166   56047 fix.go:216] guest clock: 1727813268.457992919
	I1001 20:07:48.481176   56047 fix.go:229] Guest: 2024-10-01 20:07:48.457992919 +0000 UTC Remote: 2024-10-01 20:07:48.370195237 +0000 UTC m=+36.791349790 (delta=87.797682ms)
	I1001 20:07:48.481198   56047 fix.go:200] guest clock delta is within tolerance: 87.797682ms
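[editor's note] The guest-clock check reads `date +%s.%N` on the VM and compares it against the host-side timestamp; the run is accepted because the skew is small. A small sketch of that comparison using the values from this log (the 1-second tolerance here is an assumption for illustration):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the guest's "date +%s.%N" output into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad/truncate the fractional part to 9 digits of nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Values taken from the log above.
	guest, _ := parseGuestClock("1727813268.457992919")
	remote := time.Date(2024, 10, 1, 20, 7, 48, 370195237, time.UTC)
	delta := guest.Sub(remote)
	fmt.Printf("guest clock delta: %v (within 1s tolerance: %v)\n", delta, delta.Abs() < time.Second)
}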
	I1001 20:07:48.481205   56047 start.go:83] releasing machines lock for "force-systemd-env-528861", held for 21.875563213s
	I1001 20:07:48.481231   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .DriverName
	I1001 20:07:48.482075   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetIP
	I1001 20:07:48.485161   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:48.485514   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:47", ip: ""} in network mk-force-systemd-env-528861: {Iface:virbr1 ExpiryTime:2024-10-01 21:07:41 +0000 UTC Type:0 Mac:52:54:00:73:c0:47 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:force-systemd-env-528861 Clientid:01:52:54:00:73:c0:47}
	I1001 20:07:48.485545   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined IP address 192.168.39.66 and MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:48.485682   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .DriverName
	I1001 20:07:48.486281   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .DriverName
	I1001 20:07:48.486446   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .DriverName
	I1001 20:07:48.486547   56047 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 20:07:48.486583   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHHostname
	I1001 20:07:48.486681   56047 ssh_runner.go:195] Run: cat /version.json
	I1001 20:07:48.486699   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHHostname
	I1001 20:07:48.489494   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:48.489645   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:48.489862   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:47", ip: ""} in network mk-force-systemd-env-528861: {Iface:virbr1 ExpiryTime:2024-10-01 21:07:41 +0000 UTC Type:0 Mac:52:54:00:73:c0:47 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:force-systemd-env-528861 Clientid:01:52:54:00:73:c0:47}
	I1001 20:07:48.489890   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined IP address 192.168.39.66 and MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:48.490059   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHPort
	I1001 20:07:48.490101   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:47", ip: ""} in network mk-force-systemd-env-528861: {Iface:virbr1 ExpiryTime:2024-10-01 21:07:41 +0000 UTC Type:0 Mac:52:54:00:73:c0:47 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:force-systemd-env-528861 Clientid:01:52:54:00:73:c0:47}
	I1001 20:07:48.490126   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined IP address 192.168.39.66 and MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:48.490383   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHKeyPath
	I1001 20:07:48.490618   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHUsername
	I1001 20:07:48.490618   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHPort
	I1001 20:07:48.490790   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHKeyPath
	I1001 20:07:48.490784   56047 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/force-systemd-env-528861/id_rsa Username:docker}
	I1001 20:07:48.490946   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetSSHUsername
	I1001 20:07:48.491106   56047 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/force-systemd-env-528861/id_rsa Username:docker}
	I1001 20:07:48.573646   56047 ssh_runner.go:195] Run: systemctl --version
	I1001 20:07:48.613777   56047 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 20:07:48.778615   56047 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 20:07:48.784683   56047 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 20:07:48.784758   56047 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 20:07:48.802424   56047 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 20:07:48.802449   56047 start.go:495] detecting cgroup driver to use...
	I1001 20:07:48.802468   56047 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1001 20:07:48.802527   56047 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 20:07:48.820800   56047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 20:07:48.836249   56047 docker.go:217] disabling cri-docker service (if available) ...
	I1001 20:07:48.836305   56047 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 20:07:48.851647   56047 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 20:07:48.866171   56047 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 20:07:49.010704   56047 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 20:07:49.196240   56047 docker.go:233] disabling docker service ...
	I1001 20:07:49.196378   56047 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 20:07:49.211555   56047 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 20:07:49.225976   56047 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 20:07:49.366504   56047 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 20:07:49.512164   56047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 20:07:49.529059   56047 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 20:07:49.548054   56047 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 20:07:49.548112   56047 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:07:49.560126   56047 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1001 20:07:49.560205   56047 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:07:49.570528   56047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:07:49.581451   56047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:07:49.591722   56047 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 20:07:49.602397   56047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:07:49.613814   56047 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:07:49.631407   56047 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:07:49.642295   56047 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 20:07:49.652004   56047 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 20:07:49.652066   56047 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 20:07:49.664863   56047 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
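[editor's note] When the bridge netfilter sysctl is missing, the runner falls back to loading br_netfilter and then enables IPv4 forwarding, as the lines above show. A rough os/exec sketch of that fallback (command set simplified from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// If the bridge sysctl can't be read yet, try loading the module first.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("bridge sysctl not available yet:", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("modprobe br_netfilter failed:", err)
		}
	}
	// Then make sure IPv4 forwarding is on.
	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
	}
}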
	I1001 20:07:49.674589   56047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:07:49.803075   56047 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 20:07:49.903387   56047 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 20:07:49.903480   56047 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 20:07:49.908640   56047 start.go:563] Will wait 60s for crictl version
	I1001 20:07:49.908706   56047 ssh_runner.go:195] Run: which crictl
	I1001 20:07:49.912681   56047 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 20:07:49.952244   56047 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 20:07:49.952334   56047 ssh_runner.go:195] Run: crio --version
	I1001 20:07:49.980316   56047 ssh_runner.go:195] Run: crio --version
	I1001 20:07:50.012984   56047 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 20:07:50.014215   56047 main.go:141] libmachine: (force-systemd-env-528861) Calling .GetIP
	I1001 20:07:50.017194   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:50.017590   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:c0:47", ip: ""} in network mk-force-systemd-env-528861: {Iface:virbr1 ExpiryTime:2024-10-01 21:07:41 +0000 UTC Type:0 Mac:52:54:00:73:c0:47 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:force-systemd-env-528861 Clientid:01:52:54:00:73:c0:47}
	I1001 20:07:50.017627   56047 main.go:141] libmachine: (force-systemd-env-528861) DBG | domain force-systemd-env-528861 has defined IP address 192.168.39.66 and MAC address 52:54:00:73:c0:47 in network mk-force-systemd-env-528861
	I1001 20:07:50.017811   56047 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 20:07:50.022109   56047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 20:07:50.035700   56047 kubeadm.go:883] updating cluster {Name:force-systemd-env-528861 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-528861 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.66 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 20:07:50.035809   56047 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 20:07:50.035861   56047 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:07:50.069305   56047 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1001 20:07:50.069369   56047 ssh_runner.go:195] Run: which lz4
	I1001 20:07:50.073748   56047 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1001 20:07:50.073890   56047 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 20:07:50.078537   56047 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 20:07:50.078576   56047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1001 20:07:51.295863   56047 crio.go:462] duration metric: took 1.222032364s to copy over tarball
	I1001 20:07:51.295945   56047 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 20:07:48.499715   56571 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I1001 20:07:48.499764   56571 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I1001 20:07:48.499782   56571 cache.go:56] Caching tarball of preloaded images
	I1001 20:07:48.499901   56571 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 20:07:48.499915   56571 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I1001 20:07:48.500022   56571 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/running-upgrade-819936/config.json ...
	I1001 20:07:48.500255   56571 start.go:360] acquireMachinesLock for running-upgrade-819936: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 20:07:50.170070   55919 pod_ready.go:103] pod "kube-apiserver-pause-170137" in "kube-system" namespace has status "Ready":"False"
	I1001 20:07:52.171830   55919 pod_ready.go:103] pod "kube-apiserver-pause-170137" in "kube-system" namespace has status "Ready":"False"
	I1001 20:07:53.369674   56047 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.07369949s)
	I1001 20:07:53.369705   56047 crio.go:469] duration metric: took 2.073808181s to extract the tarball
	I1001 20:07:53.369714   56047 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1001 20:07:53.408052   56047 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:07:53.456798   56047 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 20:07:53.456825   56047 cache_images.go:84] Images are preloaded, skipping loading
	I1001 20:07:53.456836   56047 kubeadm.go:934] updating node { 192.168.39.66 8443 v1.31.1 crio true true} ...
	I1001 20:07:53.456968   56047 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-528861 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.66
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-528861 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
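[editor's note] The kubelet drop-in printed above is rendered from the node's Kubernetes version, hostname, and IP. A simplified text/template sketch of producing that unit file (template layout and field names are assumptions, not minikube's actual bsutil code):

package main

import (
	"os"
	"text/template"
)

// Simplified version of the drop-in minikube writes to
// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	// Values from the run above.
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.31.1",
		"NodeName":          "force-systemd-env-528861",
		"NodeIP":            "192.168.39.66",
	})
}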
	I1001 20:07:53.457051   56047 ssh_runner.go:195] Run: crio config
	I1001 20:07:53.511823   56047 cni.go:84] Creating CNI manager for ""
	I1001 20:07:53.511846   56047 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:07:53.511858   56047 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 20:07:53.511886   56047 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.66 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-528861 NodeName:force-systemd-env-528861 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.66"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.66 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 20:07:53.512061   56047 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.66
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-528861"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.66
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.66"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 20:07:53.512133   56047 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 20:07:53.521872   56047 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 20:07:53.521953   56047 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 20:07:53.531548   56047 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1001 20:07:53.548297   56047 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 20:07:53.565187   56047 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I1001 20:07:53.582643   56047 ssh_runner.go:195] Run: grep 192.168.39.66	control-plane.minikube.internal$ /etc/hosts
	I1001 20:07:53.586533   56047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.66	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 20:07:53.599080   56047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:07:53.734897   56047 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:07:53.752424   56047 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861 for IP: 192.168.39.66
	I1001 20:07:53.752449   56047 certs.go:194] generating shared ca certs ...
	I1001 20:07:53.752467   56047 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:07:53.752650   56047 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 20:07:53.752708   56047 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 20:07:53.752721   56047 certs.go:256] generating profile certs ...
	I1001 20:07:53.752792   56047 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/client.key
	I1001 20:07:53.752822   56047 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/client.crt with IP's: []
	I1001 20:07:53.956316   56047 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/client.crt ...
	I1001 20:07:53.956349   56047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/client.crt: {Name:mke982f4e02ff76e17ff27206a5e1da9b85a3281 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:07:53.956570   56047 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/client.key ...
	I1001 20:07:53.956585   56047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/client.key: {Name:mk1eb4310e5ca2e13b79bd0c9c2e5076207322d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:07:53.956691   56047 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/apiserver.key.7bfd6f5c
	I1001 20:07:53.956718   56047 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/apiserver.crt.7bfd6f5c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.66]
	I1001 20:07:54.105239   56047 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/apiserver.crt.7bfd6f5c ...
	I1001 20:07:54.105266   56047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/apiserver.crt.7bfd6f5c: {Name:mk7016dad2310bf49b1f63de91a994076e661861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:07:54.105453   56047 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/apiserver.key.7bfd6f5c ...
	I1001 20:07:54.105470   56047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/apiserver.key.7bfd6f5c: {Name:mka3d5404e7decb09790810073aa3571c1bc74ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:07:54.105569   56047 certs.go:381] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/apiserver.crt.7bfd6f5c -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/apiserver.crt
	I1001 20:07:54.105656   56047 certs.go:385] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/apiserver.key.7bfd6f5c -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/apiserver.key
	I1001 20:07:54.105717   56047 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/proxy-client.key
	I1001 20:07:54.105734   56047 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/proxy-client.crt with IP's: []
	I1001 20:07:54.217196   56047 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/proxy-client.crt ...
	I1001 20:07:54.217227   56047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/proxy-client.crt: {Name:mk22fa10cba3e3e3a5c233204fa63d7e63e068e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:07:54.217389   56047 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/proxy-client.key ...
	I1001 20:07:54.217401   56047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/proxy-client.key: {Name:mka36d6f6aaace236793a7cd4c8cca5f84c542e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:07:54.217467   56047 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 20:07:54.217485   56047 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 20:07:54.217495   56047 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 20:07:54.217509   56047 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 20:07:54.217522   56047 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 20:07:54.217537   56047 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 20:07:54.217549   56047 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 20:07:54.217560   56047 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 20:07:54.217607   56047 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem (1338 bytes)
	W1001 20:07:54.217641   56047 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430_empty.pem, impossibly tiny 0 bytes
	I1001 20:07:54.217650   56047 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 20:07:54.217673   56047 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 20:07:54.217699   56047 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 20:07:54.217722   56047 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 20:07:54.217758   56047 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem (1708 bytes)
	I1001 20:07:54.217793   56047 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> /usr/share/ca-certificates/184302.pem
	I1001 20:07:54.217807   56047 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:07:54.217819   56047 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem -> /usr/share/ca-certificates/18430.pem
	I1001 20:07:54.218326   56047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 20:07:54.246459   56047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 20:07:54.271808   56047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 20:07:54.296051   56047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 20:07:54.319955   56047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1001 20:07:54.347584   56047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 20:07:54.370767   56047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 20:07:54.395167   56047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1001 20:07:54.421753   56047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /usr/share/ca-certificates/184302.pem (1708 bytes)
	I1001 20:07:54.448647   56047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 20:07:54.476102   56047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem --> /usr/share/ca-certificates/18430.pem (1338 bytes)
	I1001 20:07:54.502899   56047 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 20:07:54.525595   56047 ssh_runner.go:195] Run: openssl version
	I1001 20:07:54.533787   56047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/184302.pem && ln -fs /usr/share/ca-certificates/184302.pem /etc/ssl/certs/184302.pem"
	I1001 20:07:54.548715   56047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/184302.pem
	I1001 20:07:54.553801   56047 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 20:07:54.553869   56047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/184302.pem
	I1001 20:07:54.562046   56047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/184302.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 20:07:54.579314   56047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 20:07:54.593997   56047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:07:54.600199   56047 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:07:54.600264   56047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:07:54.607027   56047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 20:07:54.621750   56047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18430.pem && ln -fs /usr/share/ca-certificates/18430.pem /etc/ssl/certs/18430.pem"
	I1001 20:07:54.636027   56047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18430.pem
	I1001 20:07:54.641848   56047 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 20:07:54.641904   56047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18430.pem
	I1001 20:07:54.649280   56047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18430.pem /etc/ssl/certs/51391683.0"
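
A rough standalone equivalent of the trust-store wiring repeated above for each CA file — link the PEM into /etc/ssl/certs and add the OpenSSL subject-hash symlink (<hash>.0) that TLS libraries use to find it:

    pem=/usr/share/ca-certificates/minikubeCA.pem      # one of the three certs handled above
    sudo ln -fs "$pem" /etc/ssl/certs/"$(basename "$pem")"
    hash=$(openssl x509 -hash -noout -in "$pem")       # b5213941 for minikubeCA.pem in this run
    sudo ln -fs /etc/ssl/certs/"$(basename "$pem")" /etc/ssl/certs/"$hash".0
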
	I1001 20:07:54.663411   56047 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 20:07:54.668910   56047 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 20:07:54.668962   56047 kubeadm.go:392] StartCluster: {Name:force-systemd-env-528861 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-528861 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.66 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:07:54.669045   56047 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 20:07:54.669113   56047 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 20:07:54.713943   56047 cri.go:89] found id: ""
	I1001 20:07:54.714048   56047 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 20:07:54.724411   56047 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 20:07:54.734076   56047 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:07:54.744066   56047 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:07:54.744084   56047 kubeadm.go:157] found existing configuration files:
	
	I1001 20:07:54.744121   56047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 20:07:54.755322   56047 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:07:54.755387   56047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:07:54.768431   56047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 20:07:54.779237   56047 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:07:54.779286   56047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:07:54.788768   56047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 20:07:54.798097   56047 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:07:54.798163   56047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:07:54.808472   56047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 20:07:54.818701   56047 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:07:54.818769   56047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
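
The four grep/rm pairs above amount to a stale-kubeconfig sweep: any file under /etc/kubernetes that does not reference the expected API endpoint is removed so kubeadm can regenerate it. A compact shell equivalent:

    endpoint=https://control-plane.minikube.internal:8443
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" /etc/kubernetes/"$f" || sudo rm -f /etc/kubernetes/"$f"
    done
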
	I1001 20:07:54.829443   56047 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 20:07:54.947163   56047 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 20:07:54.947233   56047 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:07:55.060142   56047 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:07:55.060259   56047 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:07:55.060422   56047 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 20:07:55.071662   56047 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
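
For readability, the kubeadm init invocation logged above, with the flag list unchanged:

    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem
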
	I1001 20:07:55.105187   56571 start.go:364] duration metric: took 6.604900517s to acquireMachinesLock for "running-upgrade-819936"
	I1001 20:07:55.105270   56571 start.go:96] Skipping create...Using existing machine configuration
	I1001 20:07:55.105278   56571 fix.go:54] fixHost starting: 
	I1001 20:07:55.105696   56571 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:07:55.105747   56571 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:07:55.123668   56571 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45645
	I1001 20:07:55.124185   56571 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:07:55.124786   56571 main.go:141] libmachine: Using API Version  1
	I1001 20:07:55.124814   56571 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:07:55.125197   56571 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:07:55.125414   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .DriverName
	I1001 20:07:55.125564   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetState
	I1001 20:07:55.127187   56571 fix.go:112] recreateIfNeeded on running-upgrade-819936: state=Running err=<nil>
	W1001 20:07:55.127208   56571 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 20:07:55.212329   56571 out.go:177] * Updating the running kvm2 "running-upgrade-819936" VM ...
	I1001 20:07:54.848684   56202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 20:07:54.848700   56202 machine.go:96] duration metric: took 6.34112791s to provisionDockerMachine
	I1001 20:07:54.848711   56202 start.go:293] postStartSetup for "NoKubernetes-791490" (driver="kvm2")
	I1001 20:07:54.848723   56202 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 20:07:54.848747   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .DriverName
	I1001 20:07:54.849051   56202 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 20:07:54.849079   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHHostname
	I1001 20:07:54.852558   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | domain NoKubernetes-791490 has defined MAC address 52:54:00:cf:81:3c in network mk-NoKubernetes-791490
	I1001 20:07:54.852975   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:3c", ip: ""} in network mk-NoKubernetes-791490: {Iface:virbr3 ExpiryTime:2024-10-01 21:06:48 +0000 UTC Type:0 Mac:52:54:00:cf:81:3c Iaid: IPaddr:192.168.61.118 Prefix:24 Hostname:NoKubernetes-791490 Clientid:01:52:54:00:cf:81:3c}
	I1001 20:07:54.852997   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | domain NoKubernetes-791490 has defined IP address 192.168.61.118 and MAC address 52:54:00:cf:81:3c in network mk-NoKubernetes-791490
	I1001 20:07:54.853202   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHPort
	I1001 20:07:54.853385   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHKeyPath
	I1001 20:07:54.853537   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHUsername
	I1001 20:07:54.853691   56202 sshutil.go:53] new ssh client: &{IP:192.168.61.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/NoKubernetes-791490/id_rsa Username:docker}
	I1001 20:07:54.943357   56202 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 20:07:54.947939   56202 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 20:07:54.947957   56202 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 20:07:54.948047   56202 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 20:07:54.948144   56202 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 20:07:54.948259   56202 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 20:07:54.959075   56202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 20:07:54.984238   56202 start.go:296] duration metric: took 135.505843ms for postStartSetup
	I1001 20:07:54.984278   56202 fix.go:56] duration metric: took 6.502961587s for fixHost
	I1001 20:07:54.984296   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHHostname
	I1001 20:07:54.987330   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | domain NoKubernetes-791490 has defined MAC address 52:54:00:cf:81:3c in network mk-NoKubernetes-791490
	I1001 20:07:54.987684   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:3c", ip: ""} in network mk-NoKubernetes-791490: {Iface:virbr3 ExpiryTime:2024-10-01 21:06:48 +0000 UTC Type:0 Mac:52:54:00:cf:81:3c Iaid: IPaddr:192.168.61.118 Prefix:24 Hostname:NoKubernetes-791490 Clientid:01:52:54:00:cf:81:3c}
	I1001 20:07:54.987724   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | domain NoKubernetes-791490 has defined IP address 192.168.61.118 and MAC address 52:54:00:cf:81:3c in network mk-NoKubernetes-791490
	I1001 20:07:54.987877   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHPort
	I1001 20:07:54.988087   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHKeyPath
	I1001 20:07:54.988252   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHKeyPath
	I1001 20:07:54.988430   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHUsername
	I1001 20:07:54.988626   56202 main.go:141] libmachine: Using SSH client type: native
	I1001 20:07:54.988782   56202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.118 22 <nil> <nil>}
	I1001 20:07:54.988786   56202 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 20:07:55.105047   56202 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727813275.098839603
	
	I1001 20:07:55.105061   56202 fix.go:216] guest clock: 1727813275.098839603
	I1001 20:07:55.105069   56202 fix.go:229] Guest: 2024-10-01 20:07:55.098839603 +0000 UTC Remote: 2024-10-01 20:07:54.984280495 +0000 UTC m=+35.684413566 (delta=114.559108ms)
	I1001 20:07:55.105090   56202 fix.go:200] guest clock delta is within tolerance: 114.559108ms
	I1001 20:07:55.105094   56202 start.go:83] releasing machines lock for "NoKubernetes-791490", held for 6.623801912s
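
The fix.go lines above compare the guest clock (read over SSH with date +%s.%N) against the host clock and accept the ~115ms drift. A rough shell rendering of that check (the 2s tolerance here is illustrative, not minikube's exact value; the key path is the one from the sshutil line above):

    guest=$(ssh -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/NoKubernetes-791490/id_rsa \
            docker@192.168.61.118 'date +%s.%N')
    host=$(date +%s.%N)
    awk -v g="$guest" -v h="$host" 'BEGIN { d = g - h; if (d < 0) d = -d;
      printf "guest clock delta: %.3fs\n", d; exit (d < 2 ? 0 : 1) }'
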
	I1001 20:07:55.105116   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .DriverName
	I1001 20:07:55.105382   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetIP
	I1001 20:07:55.108653   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | domain NoKubernetes-791490 has defined MAC address 52:54:00:cf:81:3c in network mk-NoKubernetes-791490
	I1001 20:07:55.109069   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:3c", ip: ""} in network mk-NoKubernetes-791490: {Iface:virbr3 ExpiryTime:2024-10-01 21:06:48 +0000 UTC Type:0 Mac:52:54:00:cf:81:3c Iaid: IPaddr:192.168.61.118 Prefix:24 Hostname:NoKubernetes-791490 Clientid:01:52:54:00:cf:81:3c}
	I1001 20:07:55.109100   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | domain NoKubernetes-791490 has defined IP address 192.168.61.118 and MAC address 52:54:00:cf:81:3c in network mk-NoKubernetes-791490
	I1001 20:07:55.109255   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .DriverName
	I1001 20:07:55.109830   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .DriverName
	I1001 20:07:55.110023   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .DriverName
	I1001 20:07:55.110099   56202 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 20:07:55.110135   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHHostname
	I1001 20:07:55.110187   56202 ssh_runner.go:195] Run: cat /version.json
	I1001 20:07:55.110203   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHHostname
	I1001 20:07:55.112924   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | domain NoKubernetes-791490 has defined MAC address 52:54:00:cf:81:3c in network mk-NoKubernetes-791490
	I1001 20:07:55.113223   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | domain NoKubernetes-791490 has defined MAC address 52:54:00:cf:81:3c in network mk-NoKubernetes-791490
	I1001 20:07:55.113244   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:3c", ip: ""} in network mk-NoKubernetes-791490: {Iface:virbr3 ExpiryTime:2024-10-01 21:06:48 +0000 UTC Type:0 Mac:52:54:00:cf:81:3c Iaid: IPaddr:192.168.61.118 Prefix:24 Hostname:NoKubernetes-791490 Clientid:01:52:54:00:cf:81:3c}
	I1001 20:07:55.113262   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | domain NoKubernetes-791490 has defined IP address 192.168.61.118 and MAC address 52:54:00:cf:81:3c in network mk-NoKubernetes-791490
	I1001 20:07:55.113528   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHPort
	I1001 20:07:55.113589   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:3c", ip: ""} in network mk-NoKubernetes-791490: {Iface:virbr3 ExpiryTime:2024-10-01 21:06:48 +0000 UTC Type:0 Mac:52:54:00:cf:81:3c Iaid: IPaddr:192.168.61.118 Prefix:24 Hostname:NoKubernetes-791490 Clientid:01:52:54:00:cf:81:3c}
	I1001 20:07:55.113610   56202 main.go:141] libmachine: (NoKubernetes-791490) DBG | domain NoKubernetes-791490 has defined IP address 192.168.61.118 and MAC address 52:54:00:cf:81:3c in network mk-NoKubernetes-791490
	I1001 20:07:55.113712   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHKeyPath
	I1001 20:07:55.113807   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHPort
	I1001 20:07:55.113883   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHUsername
	I1001 20:07:55.113953   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHKeyPath
	I1001 20:07:55.114021   56202 sshutil.go:53] new ssh client: &{IP:192.168.61.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/NoKubernetes-791490/id_rsa Username:docker}
	I1001 20:07:55.114045   56202 main.go:141] libmachine: (NoKubernetes-791490) Calling .GetSSHUsername
	I1001 20:07:55.114140   56202 sshutil.go:53] new ssh client: &{IP:192.168.61.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/NoKubernetes-791490/id_rsa Username:docker}
	I1001 20:07:55.193372   56202 ssh_runner.go:195] Run: systemctl --version
	I1001 20:07:55.237854   56202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:07:55.363053   56202 out.go:177]   - Kubernetes: Stopping ...
	I1001 20:07:54.671225   55919 pod_ready.go:103] pod "kube-apiserver-pause-170137" in "kube-system" namespace has status "Ready":"False"
	I1001 20:07:56.269117   55919 pod_ready.go:93] pod "kube-apiserver-pause-170137" in "kube-system" namespace has status "Ready":"True"
	I1001 20:07:56.269144   55919 pod_ready.go:82] duration metric: took 8.105420566s for pod "kube-apiserver-pause-170137" in "kube-system" namespace to be "Ready" ...
	I1001 20:07:56.269155   55919 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-170137" in "kube-system" namespace to be "Ready" ...
	I1001 20:07:56.275165   55919 pod_ready.go:93] pod "kube-controller-manager-pause-170137" in "kube-system" namespace has status "Ready":"True"
	I1001 20:07:56.275190   55919 pod_ready.go:82] duration metric: took 6.028005ms for pod "kube-controller-manager-pause-170137" in "kube-system" namespace to be "Ready" ...
	I1001 20:07:56.275235   55919 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-ffrj7" in "kube-system" namespace to be "Ready" ...
	I1001 20:07:56.281296   55919 pod_ready.go:93] pod "kube-proxy-ffrj7" in "kube-system" namespace has status "Ready":"True"
	I1001 20:07:56.281319   55919 pod_ready.go:82] duration metric: took 6.075723ms for pod "kube-proxy-ffrj7" in "kube-system" namespace to be "Ready" ...
	I1001 20:07:56.281328   55919 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-170137" in "kube-system" namespace to be "Ready" ...
	I1001 20:07:56.287288   55919 pod_ready.go:93] pod "kube-scheduler-pause-170137" in "kube-system" namespace has status "Ready":"True"
	I1001 20:07:56.287316   55919 pod_ready.go:82] duration metric: took 5.980459ms for pod "kube-scheduler-pause-170137" in "kube-system" namespace to be "Ready" ...
	I1001 20:07:56.287327   55919 pod_ready.go:39] duration metric: took 9.15110471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:07:56.287347   55919 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 20:07:56.300618   55919 ops.go:34] apiserver oom_adj: -16
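
The oom_adj probe above, as a standalone check (same command as logged; -16 means the kernel is strongly discouraged from OOM-killing the API server):

    cat /proc/$(pgrep kube-apiserver)/oom_adj
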
	I1001 20:07:56.300646   55919 kubeadm.go:597] duration metric: took 25.867921069s to restartPrimaryControlPlane
	I1001 20:07:56.300659   55919 kubeadm.go:394] duration metric: took 26.112186622s to StartCluster
	I1001 20:07:56.300679   55919 settings.go:142] acquiring lock: {Name:mkeb3fe64ed992373f048aef24eb0d675bcab60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:07:56.300762   55919 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:07:56.301842   55919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/kubeconfig: {Name:mk7ef19d546928001abf2478a3abe1d17765a591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:07:56.387704   55919 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.12 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 20:07:56.387806   55919 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 20:07:56.387905   55919 config.go:182] Loaded profile config "pause-170137": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:07:56.522835   55919 out.go:177] * Enabled addons: 
	I1001 20:07:56.522848   55919 out.go:177] * Verifying Kubernetes components...
	I1001 20:07:55.212384   56047 out.go:235]   - Generating certificates and keys ...
	I1001 20:07:55.212499   56047 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:07:55.212592   56047 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:07:55.212694   56047 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1001 20:07:55.314090   56047 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1001 20:07:55.604716   56047 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1001 20:07:55.752224   56047 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1001 20:07:55.845604   56047 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1001 20:07:55.845742   56047 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-528861 localhost] and IPs [192.168.39.66 127.0.0.1 ::1]
	I1001 20:07:56.066284   56047 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1001 20:07:56.066494   56047 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-528861 localhost] and IPs [192.168.39.66 127.0.0.1 ::1]
	I1001 20:07:56.223381   56047 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1001 20:07:56.497682   56047 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1001 20:07:55.385468   56202 ssh_runner.go:195] Run: sudo systemctl stop -f kubelet
	I1001 20:07:55.437514   56202 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1001 20:07:55.437590   56202 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 20:07:55.489009   56202 cri.go:89] found id: "d446ba59b9d5113f00c8c28ece976b5b6d4c24fd95235085fb1637f9c63441f8"
	I1001 20:07:55.489022   56202 cri.go:89] found id: "e7505f721355a85e318e0c958d3b126ad25d0593ce91549f5989cacdaf84a869"
	I1001 20:07:55.489027   56202 cri.go:89] found id: "de8e9d65a4e29b9c03e6325def3a47421ade230b27bb45b91721735391f56031"
	I1001 20:07:55.489030   56202 cri.go:89] found id: "c5cef1fe6ea2dcce51fc4f6431b8a7d8b12eb65f5a80f96efdcf4c9918dc0061"
	I1001 20:07:55.489033   56202 cri.go:89] found id: "6e793101d847107193fa51b3c563cc7b092216880cff8ef3aec1ccfb99321b27"
	I1001 20:07:55.489036   56202 cri.go:89] found id: "c3443d5e9a9fd506404391941119ce8905d84f27059b22d33e4dba4c3510ebb3"
	I1001 20:07:55.489039   56202 cri.go:89] found id: "a3dc296fa1355441bd1594fc0774cf837f0bcc3c967c4f9de86ee14f7fccb105"
	I1001 20:07:55.489042   56202 cri.go:89] found id: ""
	W1001 20:07:55.489061   56202 kubeadm.go:838] found 7 kube-system containers to stop
	I1001 20:07:55.489066   56202 cri.go:252] Stopping containers: [d446ba59b9d5113f00c8c28ece976b5b6d4c24fd95235085fb1637f9c63441f8 e7505f721355a85e318e0c958d3b126ad25d0593ce91549f5989cacdaf84a869 de8e9d65a4e29b9c03e6325def3a47421ade230b27bb45b91721735391f56031 c5cef1fe6ea2dcce51fc4f6431b8a7d8b12eb65f5a80f96efdcf4c9918dc0061 6e793101d847107193fa51b3c563cc7b092216880cff8ef3aec1ccfb99321b27 c3443d5e9a9fd506404391941119ce8905d84f27059b22d33e4dba4c3510ebb3 a3dc296fa1355441bd1594fc0774cf837f0bcc3c967c4f9de86ee14f7fccb105]
	I1001 20:07:55.489113   56202 ssh_runner.go:195] Run: which crictl
	I1001 20:07:55.493327   56202 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 d446ba59b9d5113f00c8c28ece976b5b6d4c24fd95235085fb1637f9c63441f8 e7505f721355a85e318e0c958d3b126ad25d0593ce91549f5989cacdaf84a869 de8e9d65a4e29b9c03e6325def3a47421ade230b27bb45b91721735391f56031 c5cef1fe6ea2dcce51fc4f6431b8a7d8b12eb65f5a80f96efdcf4c9918dc0061 6e793101d847107193fa51b3c563cc7b092216880cff8ef3aec1ccfb99321b27 c3443d5e9a9fd506404391941119ce8905d84f27059b22d33e4dba4c3510ebb3 a3dc296fa1355441bd1594fc0774cf837f0bcc3c967c4f9de86ee14f7fccb105
	I1001 20:07:57.024559   56202 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 d446ba59b9d5113f00c8c28ece976b5b6d4c24fd95235085fb1637f9c63441f8 e7505f721355a85e318e0c958d3b126ad25d0593ce91549f5989cacdaf84a869 de8e9d65a4e29b9c03e6325def3a47421ade230b27bb45b91721735391f56031 c5cef1fe6ea2dcce51fc4f6431b8a7d8b12eb65f5a80f96efdcf4c9918dc0061 6e793101d847107193fa51b3c563cc7b092216880cff8ef3aec1ccfb99321b27 c3443d5e9a9fd506404391941119ce8905d84f27059b22d33e4dba4c3510ebb3 a3dc296fa1355441bd1594fc0774cf837f0bcc3c967c4f9de86ee14f7fccb105: (1.531185855s)
	I1001 20:07:57.024623   56202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:07:57.046057   56202 out.go:177]   - Kubernetes: Stopped
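
A rough standalone equivalent of the "Kubernetes: Stopping" sequence above: stop the kubelet, collect the kube-system container IDs via crictl, then stop them with the same 10s grace period:

    sudo systemctl stop -f kubelet
    ids=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
    [ -n "$ids" ] && sudo crictl stop --timeout=10 $ids
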
	I1001 20:07:56.722295   56047 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1001 20:07:56.722404   56047 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:07:56.784999   56047 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:07:56.924546   56047 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 20:07:57.044753   56047 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:07:57.300602   56047 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:07:57.421316   56047 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:07:57.422009   56047 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:07:57.427190   56047 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:07:55.336962   56571 machine.go:93] provisionDockerMachine start ...
	I1001 20:07:55.337008   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .DriverName
	I1001 20:07:55.337314   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHHostname
	I1001 20:07:55.340329   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | domain running-upgrade-819936 has defined MAC address 52:54:00:40:1e:d5 in network mk-running-upgrade-819936
	I1001 20:07:55.340897   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:1e:d5", ip: ""} in network mk-running-upgrade-819936: {Iface:virbr4 ExpiryTime:2024-10-01 21:07:12 +0000 UTC Type:0 Mac:52:54:00:40:1e:d5 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:running-upgrade-819936 Clientid:01:52:54:00:40:1e:d5}
	I1001 20:07:55.340932   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | domain running-upgrade-819936 has defined IP address 192.168.72.199 and MAC address 52:54:00:40:1e:d5 in network mk-running-upgrade-819936
	I1001 20:07:55.341204   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHPort
	I1001 20:07:55.341385   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHKeyPath
	I1001 20:07:55.341516   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHKeyPath
	I1001 20:07:55.341645   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHUsername
	I1001 20:07:55.341784   56571 main.go:141] libmachine: Using SSH client type: native
	I1001 20:07:55.342080   56571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I1001 20:07:55.342096   56571 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 20:07:55.464621   56571 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-819936
	
	I1001 20:07:55.464657   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetMachineName
	I1001 20:07:55.464920   56571 buildroot.go:166] provisioning hostname "running-upgrade-819936"
	I1001 20:07:55.464952   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetMachineName
	I1001 20:07:55.465190   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHHostname
	I1001 20:07:55.468218   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | domain running-upgrade-819936 has defined MAC address 52:54:00:40:1e:d5 in network mk-running-upgrade-819936
	I1001 20:07:55.468721   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:1e:d5", ip: ""} in network mk-running-upgrade-819936: {Iface:virbr4 ExpiryTime:2024-10-01 21:07:12 +0000 UTC Type:0 Mac:52:54:00:40:1e:d5 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:running-upgrade-819936 Clientid:01:52:54:00:40:1e:d5}
	I1001 20:07:55.468750   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | domain running-upgrade-819936 has defined IP address 192.168.72.199 and MAC address 52:54:00:40:1e:d5 in network mk-running-upgrade-819936
	I1001 20:07:55.468923   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHPort
	I1001 20:07:55.469107   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHKeyPath
	I1001 20:07:55.469252   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHKeyPath
	I1001 20:07:55.469386   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHUsername
	I1001 20:07:55.469522   56571 main.go:141] libmachine: Using SSH client type: native
	I1001 20:07:55.469781   56571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I1001 20:07:55.469797   56571 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-819936 && echo "running-upgrade-819936" | sudo tee /etc/hostname
	I1001 20:07:55.600251   56571 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-819936
	
	I1001 20:07:55.600295   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHHostname
	I1001 20:07:55.602882   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | domain running-upgrade-819936 has defined MAC address 52:54:00:40:1e:d5 in network mk-running-upgrade-819936
	I1001 20:07:55.603208   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:1e:d5", ip: ""} in network mk-running-upgrade-819936: {Iface:virbr4 ExpiryTime:2024-10-01 21:07:12 +0000 UTC Type:0 Mac:52:54:00:40:1e:d5 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:running-upgrade-819936 Clientid:01:52:54:00:40:1e:d5}
	I1001 20:07:55.603247   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | domain running-upgrade-819936 has defined IP address 192.168.72.199 and MAC address 52:54:00:40:1e:d5 in network mk-running-upgrade-819936
	I1001 20:07:55.603349   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHPort
	I1001 20:07:55.603527   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHKeyPath
	I1001 20:07:55.603688   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHKeyPath
	I1001 20:07:55.603842   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHUsername
	I1001 20:07:55.604005   56571 main.go:141] libmachine: Using SSH client type: native
	I1001 20:07:55.604215   56571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I1001 20:07:55.604241   56571 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-819936' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-819936/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-819936' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 20:07:55.721140   56571 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 20:07:55.721176   56571 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 20:07:55.721198   56571 buildroot.go:174] setting up certificates
	I1001 20:07:55.721209   56571 provision.go:84] configureAuth start
	I1001 20:07:55.721222   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetMachineName
	I1001 20:07:55.721607   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetIP
	I1001 20:07:55.724942   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | domain running-upgrade-819936 has defined MAC address 52:54:00:40:1e:d5 in network mk-running-upgrade-819936
	I1001 20:07:55.725363   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:1e:d5", ip: ""} in network mk-running-upgrade-819936: {Iface:virbr4 ExpiryTime:2024-10-01 21:07:12 +0000 UTC Type:0 Mac:52:54:00:40:1e:d5 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:running-upgrade-819936 Clientid:01:52:54:00:40:1e:d5}
	I1001 20:07:55.725387   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | domain running-upgrade-819936 has defined IP address 192.168.72.199 and MAC address 52:54:00:40:1e:d5 in network mk-running-upgrade-819936
	I1001 20:07:55.725592   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHHostname
	I1001 20:07:55.728211   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | domain running-upgrade-819936 has defined MAC address 52:54:00:40:1e:d5 in network mk-running-upgrade-819936
	I1001 20:07:55.728572   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:1e:d5", ip: ""} in network mk-running-upgrade-819936: {Iface:virbr4 ExpiryTime:2024-10-01 21:07:12 +0000 UTC Type:0 Mac:52:54:00:40:1e:d5 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:running-upgrade-819936 Clientid:01:52:54:00:40:1e:d5}
	I1001 20:07:55.728599   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | domain running-upgrade-819936 has defined IP address 192.168.72.199 and MAC address 52:54:00:40:1e:d5 in network mk-running-upgrade-819936
	I1001 20:07:55.728746   56571 provision.go:143] copyHostCerts
	I1001 20:07:55.728811   56571 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 20:07:55.728821   56571 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 20:07:55.728873   56571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 20:07:55.728965   56571 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 20:07:55.728972   56571 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 20:07:55.728991   56571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 20:07:55.729042   56571 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 20:07:55.729048   56571 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 20:07:55.729065   56571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 20:07:55.729107   56571 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-819936 san=[127.0.0.1 192.168.72.199 localhost minikube running-upgrade-819936]
	I1001 20:07:55.869292   56571 provision.go:177] copyRemoteCerts
	I1001 20:07:55.869371   56571 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 20:07:55.869413   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHHostname
	I1001 20:07:55.872814   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | domain running-upgrade-819936 has defined MAC address 52:54:00:40:1e:d5 in network mk-running-upgrade-819936
	I1001 20:07:55.873271   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:1e:d5", ip: ""} in network mk-running-upgrade-819936: {Iface:virbr4 ExpiryTime:2024-10-01 21:07:12 +0000 UTC Type:0 Mac:52:54:00:40:1e:d5 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:running-upgrade-819936 Clientid:01:52:54:00:40:1e:d5}
	I1001 20:07:55.873319   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | domain running-upgrade-819936 has defined IP address 192.168.72.199 and MAC address 52:54:00:40:1e:d5 in network mk-running-upgrade-819936
	I1001 20:07:55.873533   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHPort
	I1001 20:07:55.873734   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHKeyPath
	I1001 20:07:55.873909   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHUsername
	I1001 20:07:55.874054   56571 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/running-upgrade-819936/id_rsa Username:docker}
	I1001 20:07:55.961440   56571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 20:07:55.986467   56571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1001 20:07:56.072728   56571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 20:07:56.136647   56571 provision.go:87] duration metric: took 415.424317ms to configureAuth
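
The configureAuth step above generates a Docker-machine style server certificate whose SANs cover 127.0.0.1, 192.168.72.199, localhost, minikube and the machine name, then copies it to /etc/docker on the guest. One way to confirm the SANs on the copied certificate (run on the guest):

    openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
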
	I1001 20:07:56.136679   56571 buildroot.go:189] setting minikube options for container-runtime
	I1001 20:07:56.136883   56571 config.go:182] Loaded profile config "running-upgrade-819936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1001 20:07:56.136962   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHHostname
	I1001 20:07:56.140121   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | domain running-upgrade-819936 has defined MAC address 52:54:00:40:1e:d5 in network mk-running-upgrade-819936
	I1001 20:07:56.140534   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:1e:d5", ip: ""} in network mk-running-upgrade-819936: {Iface:virbr4 ExpiryTime:2024-10-01 21:07:12 +0000 UTC Type:0 Mac:52:54:00:40:1e:d5 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:running-upgrade-819936 Clientid:01:52:54:00:40:1e:d5}
	I1001 20:07:56.140566   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | domain running-upgrade-819936 has defined IP address 192.168.72.199 and MAC address 52:54:00:40:1e:d5 in network mk-running-upgrade-819936
	I1001 20:07:56.140782   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHPort
	I1001 20:07:56.140973   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHKeyPath
	I1001 20:07:56.141200   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHKeyPath
	I1001 20:07:56.141339   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHUsername
	I1001 20:07:56.141601   56571 main.go:141] libmachine: Using SSH client type: native
	I1001 20:07:56.141801   56571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I1001 20:07:56.141827   56571 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 20:07:57.504139   56571 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 20:07:57.504173   56571 machine.go:96] duration metric: took 2.167183612s to provisionDockerMachine
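
The provisioning step just completed writes a CRI-O sysconfig drop-in and restarts the service (the printf | tee command a few lines above). A quick way to confirm it took effect on the guest:

    cat /etc/sysconfig/crio.minikube     # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl is-active crio
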
	I1001 20:07:57.504188   56571 start.go:293] postStartSetup for "running-upgrade-819936" (driver="kvm2")
	I1001 20:07:57.504201   56571 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 20:07:57.504222   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .DriverName
	I1001 20:07:57.504599   56571 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 20:07:57.504637   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHHostname
	I1001 20:07:57.507946   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | domain running-upgrade-819936 has defined MAC address 52:54:00:40:1e:d5 in network mk-running-upgrade-819936
	I1001 20:07:57.508390   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:1e:d5", ip: ""} in network mk-running-upgrade-819936: {Iface:virbr4 ExpiryTime:2024-10-01 21:07:12 +0000 UTC Type:0 Mac:52:54:00:40:1e:d5 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:running-upgrade-819936 Clientid:01:52:54:00:40:1e:d5}
	I1001 20:07:57.508522   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | domain running-upgrade-819936 has defined IP address 192.168.72.199 and MAC address 52:54:00:40:1e:d5 in network mk-running-upgrade-819936
	I1001 20:07:57.508776   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHPort
	I1001 20:07:57.508998   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHKeyPath
	I1001 20:07:57.509189   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHUsername
	I1001 20:07:57.509344   56571 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/running-upgrade-819936/id_rsa Username:docker}
	I1001 20:07:57.630331   56571 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 20:07:57.636414   56571 info.go:137] Remote host: Buildroot 2021.02.12
	I1001 20:07:57.636455   56571 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 20:07:57.636524   56571 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 20:07:57.636619   56571 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 20:07:57.636738   56571 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 20:07:57.648058   56571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 20:07:57.684672   56571 start.go:296] duration metric: took 180.468704ms for postStartSetup
	I1001 20:07:57.684726   56571 fix.go:56] duration metric: took 2.579447551s for fixHost
	I1001 20:07:57.684752   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHHostname
	I1001 20:07:57.687410   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | domain running-upgrade-819936 has defined MAC address 52:54:00:40:1e:d5 in network mk-running-upgrade-819936
	I1001 20:07:57.687776   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:1e:d5", ip: ""} in network mk-running-upgrade-819936: {Iface:virbr4 ExpiryTime:2024-10-01 21:07:12 +0000 UTC Type:0 Mac:52:54:00:40:1e:d5 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:running-upgrade-819936 Clientid:01:52:54:00:40:1e:d5}
	I1001 20:07:57.687830   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | domain running-upgrade-819936 has defined IP address 192.168.72.199 and MAC address 52:54:00:40:1e:d5 in network mk-running-upgrade-819936
	I1001 20:07:57.688061   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHPort
	I1001 20:07:57.688302   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHKeyPath
	I1001 20:07:57.688532   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHKeyPath
	I1001 20:07:57.688736   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHUsername
	I1001 20:07:57.688954   56571 main.go:141] libmachine: Using SSH client type: native
	I1001 20:07:57.689175   56571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I1001 20:07:57.689188   56571 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 20:07:57.820972   56571 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727813277.815752449
	
	I1001 20:07:57.821000   56571 fix.go:216] guest clock: 1727813277.815752449
	I1001 20:07:57.821010   56571 fix.go:229] Guest: 2024-10-01 20:07:57.815752449 +0000 UTC Remote: 2024-10-01 20:07:57.684732204 +0000 UTC m=+9.342850941 (delta=131.020245ms)
	I1001 20:07:57.821045   56571 fix.go:200] guest clock delta is within tolerance: 131.020245ms
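	For reference (an editorial aside, not part of the captured log): the 131.020245ms delta above is simply the guest timestamp minus the host-side "Remote" timestamp from the `date +%s.%N` probe; the whole-second part (1727813277) is identical on both sides, so only the fractional seconds matter:

	    # illustrative only; fractional seconds copied from the two fix.go lines above
	    awk 'BEGIN { printf "%.9f s\n", 0.815752449 - 0.684732204 }'
	    # prints 0.131020245 s, matching the delta the log reports as within tolerance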
	I1001 20:07:57.821051   56571 start.go:83] releasing machines lock for "running-upgrade-819936", held for 2.715805609s
	I1001 20:07:57.821082   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .DriverName
	I1001 20:07:57.821334   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetIP
	I1001 20:07:57.824127   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | domain running-upgrade-819936 has defined MAC address 52:54:00:40:1e:d5 in network mk-running-upgrade-819936
	I1001 20:07:57.824589   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:1e:d5", ip: ""} in network mk-running-upgrade-819936: {Iface:virbr4 ExpiryTime:2024-10-01 21:07:12 +0000 UTC Type:0 Mac:52:54:00:40:1e:d5 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:running-upgrade-819936 Clientid:01:52:54:00:40:1e:d5}
	I1001 20:07:57.824618   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | domain running-upgrade-819936 has defined IP address 192.168.72.199 and MAC address 52:54:00:40:1e:d5 in network mk-running-upgrade-819936
	I1001 20:07:57.824796   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .DriverName
	I1001 20:07:57.825373   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .DriverName
	I1001 20:07:57.825578   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .DriverName
	I1001 20:07:57.825680   56571 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 20:07:57.825727   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHHostname
	I1001 20:07:57.825970   56571 ssh_runner.go:195] Run: cat /version.json
	I1001 20:07:57.826024   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHHostname
	I1001 20:07:57.828832   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | domain running-upgrade-819936 has defined MAC address 52:54:00:40:1e:d5 in network mk-running-upgrade-819936
	I1001 20:07:57.829208   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:1e:d5", ip: ""} in network mk-running-upgrade-819936: {Iface:virbr4 ExpiryTime:2024-10-01 21:07:12 +0000 UTC Type:0 Mac:52:54:00:40:1e:d5 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:running-upgrade-819936 Clientid:01:52:54:00:40:1e:d5}
	I1001 20:07:57.829238   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | domain running-upgrade-819936 has defined IP address 192.168.72.199 and MAC address 52:54:00:40:1e:d5 in network mk-running-upgrade-819936
	I1001 20:07:57.829257   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | domain running-upgrade-819936 has defined MAC address 52:54:00:40:1e:d5 in network mk-running-upgrade-819936
	I1001 20:07:57.829507   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHPort
	I1001 20:07:57.829690   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHKeyPath
	I1001 20:07:57.829746   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:1e:d5", ip: ""} in network mk-running-upgrade-819936: {Iface:virbr4 ExpiryTime:2024-10-01 21:07:12 +0000 UTC Type:0 Mac:52:54:00:40:1e:d5 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:running-upgrade-819936 Clientid:01:52:54:00:40:1e:d5}
	I1001 20:07:57.829886   56571 main.go:141] libmachine: (running-upgrade-819936) DBG | domain running-upgrade-819936 has defined IP address 192.168.72.199 and MAC address 52:54:00:40:1e:d5 in network mk-running-upgrade-819936
	I1001 20:07:57.829859   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHPort
	I1001 20:07:57.829966   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHUsername
	I1001 20:07:57.830258   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHKeyPath
	I1001 20:07:57.830276   56571 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/running-upgrade-819936/id_rsa Username:docker}
	I1001 20:07:57.830413   56571 main.go:141] libmachine: (running-upgrade-819936) Calling .GetSSHUsername
	I1001 20:07:57.830527   56571 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/running-upgrade-819936/id_rsa Username:docker}
	W1001 20:07:57.954078   56571 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1001 20:07:57.954157   56571 ssh_runner.go:195] Run: systemctl --version
	I1001 20:07:57.961000   56571 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 20:07:58.122244   56571 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 20:07:58.128788   56571 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 20:07:58.128888   56571 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 20:07:58.147295   56571 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 20:07:58.147324   56571 start.go:495] detecting cgroup driver to use...
	I1001 20:07:58.147386   56571 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 20:07:58.161877   56571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 20:07:58.182773   56571 docker.go:217] disabling cri-docker service (if available) ...
	I1001 20:07:58.182822   56571 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 20:07:58.204011   56571 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 20:07:58.221564   56571 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 20:07:58.362802   56571 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 20:07:56.623679   55919 addons.go:510] duration metric: took 235.854123ms for enable addons: enabled=[]
	I1001 20:07:56.717797   55919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:07:56.879669   55919 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:07:56.897933   55919 node_ready.go:35] waiting up to 6m0s for node "pause-170137" to be "Ready" ...
	I1001 20:07:56.902462   55919 node_ready.go:49] node "pause-170137" has status "Ready":"True"
	I1001 20:07:56.902485   55919 node_ready.go:38] duration metric: took 4.517458ms for node "pause-170137" to be "Ready" ...
	I1001 20:07:56.902494   55919 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:07:56.912808   55919 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8tqn8" in "kube-system" namespace to be "Ready" ...
	I1001 20:07:56.926065   55919 pod_ready.go:93] pod "coredns-7c65d6cfc9-8tqn8" in "kube-system" namespace has status "Ready":"True"
	I1001 20:07:56.926091   55919 pod_ready.go:82] duration metric: took 13.245449ms for pod "coredns-7c65d6cfc9-8tqn8" in "kube-system" namespace to be "Ready" ...
	I1001 20:07:56.926103   55919 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-170137" in "kube-system" namespace to be "Ready" ...
	I1001 20:07:56.933270   55919 pod_ready.go:93] pod "etcd-pause-170137" in "kube-system" namespace has status "Ready":"True"
	I1001 20:07:56.933303   55919 pod_ready.go:82] duration metric: took 7.193325ms for pod "etcd-pause-170137" in "kube-system" namespace to be "Ready" ...
	I1001 20:07:56.933315   55919 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-170137" in "kube-system" namespace to be "Ready" ...
	I1001 20:07:57.270116   55919 pod_ready.go:93] pod "kube-apiserver-pause-170137" in "kube-system" namespace has status "Ready":"True"
	I1001 20:07:57.270149   55919 pod_ready.go:82] duration metric: took 336.824932ms for pod "kube-apiserver-pause-170137" in "kube-system" namespace to be "Ready" ...
	I1001 20:07:57.270163   55919 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-170137" in "kube-system" namespace to be "Ready" ...
	I1001 20:07:57.670205   55919 pod_ready.go:93] pod "kube-controller-manager-pause-170137" in "kube-system" namespace has status "Ready":"True"
	I1001 20:07:57.670237   55919 pod_ready.go:82] duration metric: took 400.065681ms for pod "kube-controller-manager-pause-170137" in "kube-system" namespace to be "Ready" ...
	I1001 20:07:57.670250   55919 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ffrj7" in "kube-system" namespace to be "Ready" ...
	I1001 20:07:58.070376   55919 pod_ready.go:93] pod "kube-proxy-ffrj7" in "kube-system" namespace has status "Ready":"True"
	I1001 20:07:58.070401   55919 pod_ready.go:82] duration metric: took 400.143376ms for pod "kube-proxy-ffrj7" in "kube-system" namespace to be "Ready" ...
	I1001 20:07:58.070416   55919 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-170137" in "kube-system" namespace to be "Ready" ...
	I1001 20:07:58.470404   55919 pod_ready.go:93] pod "kube-scheduler-pause-170137" in "kube-system" namespace has status "Ready":"True"
	I1001 20:07:58.470426   55919 pod_ready.go:82] duration metric: took 400.002904ms for pod "kube-scheduler-pause-170137" in "kube-system" namespace to be "Ready" ...
	I1001 20:07:58.470435   55919 pod_ready.go:39] duration metric: took 1.567932016s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:07:58.470449   55919 api_server.go:52] waiting for apiserver process to appear ...
	I1001 20:07:58.470509   55919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:07:58.484928   55919 api_server.go:72] duration metric: took 2.097164049s to wait for apiserver process to appear ...
	I1001 20:07:58.484951   55919 api_server.go:88] waiting for apiserver healthz status ...
	I1001 20:07:58.484969   55919 api_server.go:253] Checking apiserver healthz at https://192.168.50.12:8443/healthz ...
	I1001 20:07:58.489846   55919 api_server.go:279] https://192.168.50.12:8443/healthz returned 200:
	ok
	I1001 20:07:58.490929   55919 api_server.go:141] control plane version: v1.31.1
	I1001 20:07:58.490952   55919 api_server.go:131] duration metric: took 5.994319ms to wait for apiserver health ...
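	For reference (an editorial aside, not part of the captured log): the healthz probe above can be repeated by hand against the same endpoint. A minimal sketch, assuming the pause-170137 apiserver is still reachable and anonymous access to /healthz is allowed (the Kubernetes default); -k skips verification of the cluster's self-signed certificate:

	    curl -k https://192.168.50.12:8443/healthz
	    # a healthy apiserver answers with the plain string "ok", as logged above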
	I1001 20:07:58.490962   55919 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 20:07:58.672922   55919 system_pods.go:59] 6 kube-system pods found
	I1001 20:07:58.672956   55919 system_pods.go:61] "coredns-7c65d6cfc9-8tqn8" [b42e5352-5fa7-4a31-97a6-13e95b760487] Running
	I1001 20:07:58.672964   55919 system_pods.go:61] "etcd-pause-170137" [2a159ffc-cdff-49c4-b46e-209ea3d9bc05] Running
	I1001 20:07:58.672969   55919 system_pods.go:61] "kube-apiserver-pause-170137" [fd49f289-ef88-47d0-987f-cd35b4bcc962] Running
	I1001 20:07:58.672974   55919 system_pods.go:61] "kube-controller-manager-pause-170137" [0f77180d-0176-41ef-b45c-7d8b7b175e4f] Running
	I1001 20:07:58.672980   55919 system_pods.go:61] "kube-proxy-ffrj7" [9579b36d-adb4-4b12-a1de-b318cb62b8a3] Running
	I1001 20:07:58.672986   55919 system_pods.go:61] "kube-scheduler-pause-170137" [bb44a227-37f4-4c2f-badf-8f9c9a7e49a9] Running
	I1001 20:07:58.672994   55919 system_pods.go:74] duration metric: took 182.02402ms to wait for pod list to return data ...
	I1001 20:07:58.673004   55919 default_sa.go:34] waiting for default service account to be created ...
	I1001 20:07:58.869348   55919 default_sa.go:45] found service account: "default"
	I1001 20:07:58.869380   55919 default_sa.go:55] duration metric: took 196.368683ms for default service account to be created ...
	I1001 20:07:58.869390   55919 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 20:07:59.073381   55919 system_pods.go:86] 6 kube-system pods found
	I1001 20:07:59.073420   55919 system_pods.go:89] "coredns-7c65d6cfc9-8tqn8" [b42e5352-5fa7-4a31-97a6-13e95b760487] Running
	I1001 20:07:59.073430   55919 system_pods.go:89] "etcd-pause-170137" [2a159ffc-cdff-49c4-b46e-209ea3d9bc05] Running
	I1001 20:07:59.073436   55919 system_pods.go:89] "kube-apiserver-pause-170137" [fd49f289-ef88-47d0-987f-cd35b4bcc962] Running
	I1001 20:07:59.073442   55919 system_pods.go:89] "kube-controller-manager-pause-170137" [0f77180d-0176-41ef-b45c-7d8b7b175e4f] Running
	I1001 20:07:59.073448   55919 system_pods.go:89] "kube-proxy-ffrj7" [9579b36d-adb4-4b12-a1de-b318cb62b8a3] Running
	I1001 20:07:59.073454   55919 system_pods.go:89] "kube-scheduler-pause-170137" [bb44a227-37f4-4c2f-badf-8f9c9a7e49a9] Running
	I1001 20:07:59.073463   55919 system_pods.go:126] duration metric: took 204.066101ms to wait for k8s-apps to be running ...
	I1001 20:07:59.073476   55919 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 20:07:59.073530   55919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:07:59.091532   55919 system_svc.go:56] duration metric: took 18.047988ms WaitForService to wait for kubelet
	I1001 20:07:59.091565   55919 kubeadm.go:582] duration metric: took 2.703802322s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:07:59.091588   55919 node_conditions.go:102] verifying NodePressure condition ...
	I1001 20:07:59.271326   55919 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 20:07:59.271350   55919 node_conditions.go:123] node cpu capacity is 2
	I1001 20:07:59.271361   55919 node_conditions.go:105] duration metric: took 179.768134ms to run NodePressure ...
	I1001 20:07:59.271371   55919 start.go:241] waiting for startup goroutines ...
	I1001 20:07:59.271378   55919 start.go:246] waiting for cluster config update ...
	I1001 20:07:59.271388   55919 start.go:255] writing updated cluster config ...
	I1001 20:07:59.271666   55919 ssh_runner.go:195] Run: rm -f paused
	I1001 20:07:59.332496   55919 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 20:07:59.334336   55919 out.go:177] * Done! kubectl is now configured to use "pause-170137" cluster and "default" namespace by default
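	For reference (an editorial aside, not part of the captured log): at this point the pause-170137 run hands control back to kubectl, so a quick sanity check from the host would be:

	    kubectl config current-context     # expected to print "pause-170137"
	    kubectl -n kube-system get pods    # the six pods enumerated above should all show Running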
	I1001 20:07:57.047576   56202 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 20:07:57.203813   56202 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 20:07:57.209752   56202 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 20:07:57.209830   56202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 20:07:57.219434   56202 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1001 20:07:57.219450   56202 start.go:495] detecting cgroup driver to use...
	I1001 20:07:57.219510   56202 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 20:07:57.236275   56202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 20:07:57.255188   56202 docker.go:217] disabling cri-docker service (if available) ...
	I1001 20:07:57.255247   56202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 20:07:57.275144   56202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 20:07:57.292673   56202 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 20:07:57.454891   56202 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 20:07:57.605800   56202 docker.go:233] disabling docker service ...
	I1001 20:07:57.605865   56202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 20:07:57.629040   56202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 20:07:57.647585   56202 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 20:07:57.809418   56202 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 20:07:57.993272   56202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 20:07:58.010771   56202 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 20:07:58.036481   56202 download.go:107] Downloading: https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm.sha1 -> /home/jenkins/minikube-integration/19736-11198/.minikube/cache/linux/amd64/v0.0.0/kubeadm
	I1001 20:07:58.598866   56202 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1001 20:07:58.598910   56202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:07:58.614129   56202 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 20:07:58.614179   56202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:07:58.628437   56202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:07:58.640336   56202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:07:58.650632   56202 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 20:07:58.661285   56202 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 20:07:58.672974   56202 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 20:07:58.682837   56202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:07:58.823059   56202 ssh_runner.go:195] Run: sudo systemctl restart crio
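	For reference (an editorial aside, not part of the captured log): taken together, the sed edits above aim to leave the CRI-O drop-in with the pause image and cgroup settings minikube wants before this restart. A sketch of the affected keys, with values copied from those commands; the rest of 02-crio.conf is left untouched:

	    # /etc/crio/crio.conf.d/02-crio.conf (excerpt, illustrative)
	    pause_image = "registry.k8s.io/pause:3.9"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"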
	I1001 20:07:59.228016   56202 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 20:07:59.228070   56202 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 20:07:59.234822   56202 start.go:563] Will wait 60s for crictl version
	I1001 20:07:59.234885   56202 ssh_runner.go:195] Run: which crictl
	I1001 20:07:59.239469   56202 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 20:07:59.283556   56202 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 20:07:59.283640   56202 ssh_runner.go:195] Run: crio --version
	I1001 20:07:59.317329   56202 ssh_runner.go:195] Run: crio --version
	I1001 20:07:59.356160   56202 out.go:177] * Preparing CRI-O 1.29.1 ...
	I1001 20:07:59.357536   56202 ssh_runner.go:195] Run: rm -f paused
	I1001 20:07:59.365776   56202 out.go:177] * Done! minikube is ready without Kubernetes!
	I1001 20:07:59.368142   56202 out.go:201] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube podman-env" to point your podman-cli to the podman inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
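	For reference (an editorial aside, not part of the captured log): the hints in the box above correspond to ordinary minikube invocations. A hedged sketch of their typical use from the host; <profile> stands for the profile this run created (its name is not shown above), and the flags are the common form rather than anything taken from this run:

	    minikube -p <profile> ssh                            # open a shell on the minikube node
	    eval "$(minikube -p <profile> podman-env)"           # point the local podman CLI at minikube's podman
	    minikube -p <profile> image build -t demo:latest .   # build an image without a local docker daemon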
	
	
	==> CRI-O <==
	Oct 01 20:08:00 pause-170137 crio[2234]: time="2024-10-01 20:08:00.142240321Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727813280142208261,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1969b30-b021-401e-b63e-aaf4c93c85c2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:08:00 pause-170137 crio[2234]: time="2024-10-01 20:08:00.142874390Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1cfe2651-0a65-4d66-ba5b-07dee4319b9e name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:08:00 pause-170137 crio[2234]: time="2024-10-01 20:08:00.142937337Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1cfe2651-0a65-4d66-ba5b-07dee4319b9e name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:08:00 pause-170137 crio[2234]: time="2024-10-01 20:08:00.143273032Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:deedf3824bc9483ca76172a056cfd5164feda30f961e28daeabeaabe04e2c461,PodSandboxId:4c134d4b87189779ca1e5f046dfb42809bf2f2abbe419c6800900bd996f01821,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727813266001396140,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ffrj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9579b36d-adb4-4b12-a1de-b318cb62b8a3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0914589df96b483b8ebfe729b2aa7ec0e5c4ad21e54b98dadde189397f0853ea,PodSandboxId:5a5e1a400cb410ad546ba4f8e49985a0baa6f1790ccd87ab9aca831b36a6e3dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727813263199677736,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e51170df960e2d5f453b5738c1d025,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dabed2d08e52f4ba9d58733632c843f568a404800e60b3f0f832cd4aab283bc,PodSandboxId:9b55a4c23978acaf90c4390fe582d615987ec18383f8e584152c4bffae4d3151,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727813263210545432,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cd11daa5cdab6a4704ff3a11cdc428,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:956aef47f974a2576213aad53039396b3fd661c67b8377801c455d7c25af4d5a,PodSandboxId:120857355ab86b7130dac7f09e63a36f45ab94cff94b872058650ff83b68986c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727813253323500747,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e6a42ddfc1cb7a814eeb7af718d4bb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d7da33991924efa83fc62d469b68461bb89fd6437148b9dbdcb1e1960617df,PodSandboxId:6768cd3d490fce6181471abc753f441ac37c0df454687299db83648c397db6f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727813251412387497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03cea441204a38481b2802a79896a7c8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fbea9ec018f6657691a05a645ab4a14fa248e6538e0aaa6b33f4bcdbc5e8d8,PodSandboxId:78107dc435911602060535be331a2735409bc3b55d403aa8e0cc2533e80f9c38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727813250946779412,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8tqn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42e5352-5fa7-4a31-97a6-13e95b760487,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499a59a4cf03309bc4eb2c7fd056ae703281dbbf3731c6a575f38480721834ec,PodSandboxId:9b55a4c23978acaf90c4390fe582d615987ec18383f8e584152c4bffae4d3151,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727813250097770225,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cd11daa5cdab6a4704ff3a11cdc428,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0d4a72c290677d3b877779ce453b0e69fa0015617db7f645ad4688395ab996,PodSandboxId:5a5e1a400cb410ad546ba4f8e49985a0baa6f1790ccd87ab9aca831b36a6e3dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727813249976897819,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e51170df960e2d5f453b5738c1d025,},Annotations:map[string]string{io.kubernetes
.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b218656faf8165138d3e921d3db2d506d35669a650f07074504daff545fa656,PodSandboxId:4c134d4b87189779ca1e5f046dfb42809bf2f2abbe419c6800900bd996f01821,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727813249757550854,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ffrj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9579b36d-adb4-4b12-a1de-b318cb62b8a3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667fbb3e5706931d62ec8064b0e3050f2b38f94dcd31cdef183965f0e1b01717,PodSandboxId:5020e2cbb1fa206129471503a766d62db17a661c3f70d486c66713855ceb4d1e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727813216969711621,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8tqn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42e5352-5fa7-4a31-97a6-13e95b760487,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\
":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b60a5977fe573029539badfa21465c04853167aa7eec89399606bf02e954db7,PodSandboxId:60de18f0c7ded2535525e50ed8bbb3976400aecd3e1c5edef038ed7280581f84,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727813204351460001,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-170137,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e6a42ddfc1cb7a814eeb7af718d4bb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97bce934e0f09a84c734707360b83dc0263004bd1431ad3aae6e7a0b1174e051,PodSandboxId:44f823e07f6564322d4b75c366c2252179746733b04f417efe785bc22cc3f254,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727813204283638376,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 03cea441204a38481b2802a79896a7c8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1cfe2651-0a65-4d66-ba5b-07dee4319b9e name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:08:00 pause-170137 crio[2234]: time="2024-10-01 20:08:00.221705599Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c5839110-cc7a-460d-91c0-86c6e116b890 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:08:00 pause-170137 crio[2234]: time="2024-10-01 20:08:00.221803309Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c5839110-cc7a-460d-91c0-86c6e116b890 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:08:00 pause-170137 crio[2234]: time="2024-10-01 20:08:00.225053684Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4137129f-163e-4fff-b6a4-87096b1ec1e9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:08:00 pause-170137 crio[2234]: time="2024-10-01 20:08:00.225486298Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727813280225446868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4137129f-163e-4fff-b6a4-87096b1ec1e9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:08:00 pause-170137 crio[2234]: time="2024-10-01 20:08:00.226575387Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c08bf5a-d834-4429-b689-3981047ed8d0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:08:00 pause-170137 crio[2234]: time="2024-10-01 20:08:00.226657301Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c08bf5a-d834-4429-b689-3981047ed8d0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:08:00 pause-170137 crio[2234]: time="2024-10-01 20:08:00.227070292Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:deedf3824bc9483ca76172a056cfd5164feda30f961e28daeabeaabe04e2c461,PodSandboxId:4c134d4b87189779ca1e5f046dfb42809bf2f2abbe419c6800900bd996f01821,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727813266001396140,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ffrj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9579b36d-adb4-4b12-a1de-b318cb62b8a3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0914589df96b483b8ebfe729b2aa7ec0e5c4ad21e54b98dadde189397f0853ea,PodSandboxId:5a5e1a400cb410ad546ba4f8e49985a0baa6f1790ccd87ab9aca831b36a6e3dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727813263199677736,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e51170df960e2d5f453b5738c1d025,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dabed2d08e52f4ba9d58733632c843f568a404800e60b3f0f832cd4aab283bc,PodSandboxId:9b55a4c23978acaf90c4390fe582d615987ec18383f8e584152c4bffae4d3151,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727813263210545432,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cd11daa5cdab6a4704ff3a11cdc428,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:956aef47f974a2576213aad53039396b3fd661c67b8377801c455d7c25af4d5a,PodSandboxId:120857355ab86b7130dac7f09e63a36f45ab94cff94b872058650ff83b68986c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727813253323500747,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e6a42ddfc1cb7a814eeb7af718d4bb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d7da33991924efa83fc62d469b68461bb89fd6437148b9dbdcb1e1960617df,PodSandboxId:6768cd3d490fce6181471abc753f441ac37c0df454687299db83648c397db6f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727813251412387497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03cea441204a38481b2802a79896a7c8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fbea9ec018f6657691a05a645ab4a14fa248e6538e0aaa6b33f4bcdbc5e8d8,PodSandboxId:78107dc435911602060535be331a2735409bc3b55d403aa8e0cc2533e80f9c38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727813250946779412,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8tqn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42e5352-5fa7-4a31-97a6-13e95b760487,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499a59a4cf03309bc4eb2c7fd056ae703281dbbf3731c6a575f38480721834ec,PodSandboxId:9b55a4c23978acaf90c4390fe582d615987ec18383f8e584152c4bffae4d3151,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727813250097770225,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cd11daa5cdab6a4704ff3a11cdc428,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0d4a72c290677d3b877779ce453b0e69fa0015617db7f645ad4688395ab996,PodSandboxId:5a5e1a400cb410ad546ba4f8e49985a0baa6f1790ccd87ab9aca831b36a6e3dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727813249976897819,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e51170df960e2d5f453b5738c1d025,},Annotations:map[string]string{io.kubernetes
.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b218656faf8165138d3e921d3db2d506d35669a650f07074504daff545fa656,PodSandboxId:4c134d4b87189779ca1e5f046dfb42809bf2f2abbe419c6800900bd996f01821,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727813249757550854,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ffrj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9579b36d-adb4-4b12-a1de-b318cb62b8a3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667fbb3e5706931d62ec8064b0e3050f2b38f94dcd31cdef183965f0e1b01717,PodSandboxId:5020e2cbb1fa206129471503a766d62db17a661c3f70d486c66713855ceb4d1e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727813216969711621,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8tqn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42e5352-5fa7-4a31-97a6-13e95b760487,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\
":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b60a5977fe573029539badfa21465c04853167aa7eec89399606bf02e954db7,PodSandboxId:60de18f0c7ded2535525e50ed8bbb3976400aecd3e1c5edef038ed7280581f84,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727813204351460001,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-170137,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e6a42ddfc1cb7a814eeb7af718d4bb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97bce934e0f09a84c734707360b83dc0263004bd1431ad3aae6e7a0b1174e051,PodSandboxId:44f823e07f6564322d4b75c366c2252179746733b04f417efe785bc22cc3f254,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727813204283638376,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 03cea441204a38481b2802a79896a7c8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c08bf5a-d834-4429-b689-3981047ed8d0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:08:00 pause-170137 crio[2234]: time="2024-10-01 20:08:00.286670680Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e0a20f7c-60f3-46a6-a352-b8c36ce27c3c name=/runtime.v1.RuntimeService/Version
	Oct 01 20:08:00 pause-170137 crio[2234]: time="2024-10-01 20:08:00.286784445Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e0a20f7c-60f3-46a6-a352-b8c36ce27c3c name=/runtime.v1.RuntimeService/Version
	Oct 01 20:08:00 pause-170137 crio[2234]: time="2024-10-01 20:08:00.288374216Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f646a2d1-532d-4df2-9bda-96e0d5afc45f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:08:00 pause-170137 crio[2234]: time="2024-10-01 20:08:00.288952428Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727813280288915546,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f646a2d1-532d-4df2-9bda-96e0d5afc45f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:08:00 pause-170137 crio[2234]: time="2024-10-01 20:08:00.289570116Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e2c0335c-e95f-43dd-8fab-b52eb38b3e45 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:08:00 pause-170137 crio[2234]: time="2024-10-01 20:08:00.289624595Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e2c0335c-e95f-43dd-8fab-b52eb38b3e45 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:08:00 pause-170137 crio[2234]: time="2024-10-01 20:08:00.289933600Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:deedf3824bc9483ca76172a056cfd5164feda30f961e28daeabeaabe04e2c461,PodSandboxId:4c134d4b87189779ca1e5f046dfb42809bf2f2abbe419c6800900bd996f01821,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727813266001396140,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ffrj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9579b36d-adb4-4b12-a1de-b318cb62b8a3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0914589df96b483b8ebfe729b2aa7ec0e5c4ad21e54b98dadde189397f0853ea,PodSandboxId:5a5e1a400cb410ad546ba4f8e49985a0baa6f1790ccd87ab9aca831b36a6e3dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727813263199677736,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e51170df960e2d5f453b5738c1d025,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dabed2d08e52f4ba9d58733632c843f568a404800e60b3f0f832cd4aab283bc,PodSandboxId:9b55a4c23978acaf90c4390fe582d615987ec18383f8e584152c4bffae4d3151,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727813263210545432,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cd11daa5cdab6a4704ff3a11cdc428,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:956aef47f974a2576213aad53039396b3fd661c67b8377801c455d7c25af4d5a,PodSandboxId:120857355ab86b7130dac7f09e63a36f45ab94cff94b872058650ff83b68986c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727813253323500747,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e6a42ddfc1cb7a814eeb7af718d4bb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d7da33991924efa83fc62d469b68461bb89fd6437148b9dbdcb1e1960617df,PodSandboxId:6768cd3d490fce6181471abc753f441ac37c0df454687299db83648c397db6f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727813251412387497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03cea441204a38481b2802a79896a7c8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fbea9ec018f6657691a05a645ab4a14fa248e6538e0aaa6b33f4bcdbc5e8d8,PodSandboxId:78107dc435911602060535be331a2735409bc3b55d403aa8e0cc2533e80f9c38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727813250946779412,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8tqn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42e5352-5fa7-4a31-97a6-13e95b760487,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499a59a4cf03309bc4eb2c7fd056ae703281dbbf3731c6a575f38480721834ec,PodSandboxId:9b55a4c23978acaf90c4390fe582d615987ec18383f8e584152c4bffae4d3151,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727813250097770225,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cd11daa5cdab6a4704ff3a11cdc428,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0d4a72c290677d3b877779ce453b0e69fa0015617db7f645ad4688395ab996,PodSandboxId:5a5e1a400cb410ad546ba4f8e49985a0baa6f1790ccd87ab9aca831b36a6e3dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727813249976897819,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e51170df960e2d5f453b5738c1d025,},Annotations:map[string]string{io.kubernetes
.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b218656faf8165138d3e921d3db2d506d35669a650f07074504daff545fa656,PodSandboxId:4c134d4b87189779ca1e5f046dfb42809bf2f2abbe419c6800900bd996f01821,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727813249757550854,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ffrj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9579b36d-adb4-4b12-a1de-b318cb62b8a3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667fbb3e5706931d62ec8064b0e3050f2b38f94dcd31cdef183965f0e1b01717,PodSandboxId:5020e2cbb1fa206129471503a766d62db17a661c3f70d486c66713855ceb4d1e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727813216969711621,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8tqn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42e5352-5fa7-4a31-97a6-13e95b760487,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\
":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b60a5977fe573029539badfa21465c04853167aa7eec89399606bf02e954db7,PodSandboxId:60de18f0c7ded2535525e50ed8bbb3976400aecd3e1c5edef038ed7280581f84,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727813204351460001,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-170137,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e6a42ddfc1cb7a814eeb7af718d4bb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97bce934e0f09a84c734707360b83dc0263004bd1431ad3aae6e7a0b1174e051,PodSandboxId:44f823e07f6564322d4b75c366c2252179746733b04f417efe785bc22cc3f254,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727813204283638376,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 03cea441204a38481b2802a79896a7c8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e2c0335c-e95f-43dd-8fab-b52eb38b3e45 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:08:00 pause-170137 crio[2234]: time="2024-10-01 20:08:00.345814797Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bec482ff-1158-4328-9a0a-697d91fe6e18 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:08:00 pause-170137 crio[2234]: time="2024-10-01 20:08:00.346019941Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bec482ff-1158-4328-9a0a-697d91fe6e18 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:08:00 pause-170137 crio[2234]: time="2024-10-01 20:08:00.347030679Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=75684191-7f70-48c7-945e-65932916943a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:08:00 pause-170137 crio[2234]: time="2024-10-01 20:08:00.347463492Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727813280347436559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=75684191-7f70-48c7-945e-65932916943a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:08:00 pause-170137 crio[2234]: time="2024-10-01 20:08:00.347984985Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e6aa4c7c-bad5-4dd5-af8d-bacf3859eb0b name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:08:00 pause-170137 crio[2234]: time="2024-10-01 20:08:00.348053729Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e6aa4c7c-bad5-4dd5-af8d-bacf3859eb0b name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:08:00 pause-170137 crio[2234]: time="2024-10-01 20:08:00.348331929Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:deedf3824bc9483ca76172a056cfd5164feda30f961e28daeabeaabe04e2c461,PodSandboxId:4c134d4b87189779ca1e5f046dfb42809bf2f2abbe419c6800900bd996f01821,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727813266001396140,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ffrj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9579b36d-adb4-4b12-a1de-b318cb62b8a3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0914589df96b483b8ebfe729b2aa7ec0e5c4ad21e54b98dadde189397f0853ea,PodSandboxId:5a5e1a400cb410ad546ba4f8e49985a0baa6f1790ccd87ab9aca831b36a6e3dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727813263199677736,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e51170df960e2d5f453b5738c1d025,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dabed2d08e52f4ba9d58733632c843f568a404800e60b3f0f832cd4aab283bc,PodSandboxId:9b55a4c23978acaf90c4390fe582d615987ec18383f8e584152c4bffae4d3151,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727813263210545432,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cd11daa5cdab6a4704ff3a11cdc428,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:956aef47f974a2576213aad53039396b3fd661c67b8377801c455d7c25af4d5a,PodSandboxId:120857355ab86b7130dac7f09e63a36f45ab94cff94b872058650ff83b68986c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727813253323500747,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e6a42ddfc1cb7a814eeb7af718d4bb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d7da33991924efa83fc62d469b68461bb89fd6437148b9dbdcb1e1960617df,PodSandboxId:6768cd3d490fce6181471abc753f441ac37c0df454687299db83648c397db6f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727813251412387497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03cea441204a38481b2802a79896a7c8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fbea9ec018f6657691a05a645ab4a14fa248e6538e0aaa6b33f4bcdbc5e8d8,PodSandboxId:78107dc435911602060535be331a2735409bc3b55d403aa8e0cc2533e80f9c38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727813250946779412,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8tqn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42e5352-5fa7-4a31-97a6-13e95b760487,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499a59a4cf03309bc4eb2c7fd056ae703281dbbf3731c6a575f38480721834ec,PodSandboxId:9b55a4c23978acaf90c4390fe582d615987ec18383f8e584152c4bffae4d3151,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727813250097770225,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cd11daa5cdab6a4704ff3a11cdc428,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0d4a72c290677d3b877779ce453b0e69fa0015617db7f645ad4688395ab996,PodSandboxId:5a5e1a400cb410ad546ba4f8e49985a0baa6f1790ccd87ab9aca831b36a6e3dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727813249976897819,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e51170df960e2d5f453b5738c1d025,},Annotations:map[string]string{io.kubernetes
.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b218656faf8165138d3e921d3db2d506d35669a650f07074504daff545fa656,PodSandboxId:4c134d4b87189779ca1e5f046dfb42809bf2f2abbe419c6800900bd996f01821,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727813249757550854,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ffrj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9579b36d-adb4-4b12-a1de-b318cb62b8a3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667fbb3e5706931d62ec8064b0e3050f2b38f94dcd31cdef183965f0e1b01717,PodSandboxId:5020e2cbb1fa206129471503a766d62db17a661c3f70d486c66713855ceb4d1e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727813216969711621,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8tqn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42e5352-5fa7-4a31-97a6-13e95b760487,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\
":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b60a5977fe573029539badfa21465c04853167aa7eec89399606bf02e954db7,PodSandboxId:60de18f0c7ded2535525e50ed8bbb3976400aecd3e1c5edef038ed7280581f84,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727813204351460001,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-170137,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e6a42ddfc1cb7a814eeb7af718d4bb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97bce934e0f09a84c734707360b83dc0263004bd1431ad3aae6e7a0b1174e051,PodSandboxId:44f823e07f6564322d4b75c366c2252179746733b04f417efe785bc22cc3f254,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727813204283638376,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 03cea441204a38481b2802a79896a7c8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e6aa4c7c-bad5-4dd5-af8d-bacf3859eb0b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	deedf3824bc94       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   14 seconds ago       Running             kube-proxy                2                   4c134d4b87189       kube-proxy-ffrj7
	4dabed2d08e52       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   17 seconds ago       Running             kube-apiserver            2                   9b55a4c23978a       kube-apiserver-pause-170137
	0914589df96b4       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   17 seconds ago       Running             kube-controller-manager   2                   5a5e1a400cb41       kube-controller-manager-pause-170137
	956aef47f974a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   27 seconds ago       Running             kube-scheduler            1                   120857355ab86       kube-scheduler-pause-170137
	60d7da3399192       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   29 seconds ago       Running             etcd                      1                   6768cd3d490fc       etcd-pause-170137
	54fbea9ec018f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   29 seconds ago       Running             coredns                   1                   78107dc435911       coredns-7c65d6cfc9-8tqn8
	499a59a4cf033       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   30 seconds ago       Exited              kube-apiserver            1                   9b55a4c23978a       kube-apiserver-pause-170137
	2e0d4a72c2906       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   30 seconds ago       Exited              kube-controller-manager   1                   5a5e1a400cb41       kube-controller-manager-pause-170137
	1b218656faf81       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   30 seconds ago       Exited              kube-proxy                1                   4c134d4b87189       kube-proxy-ffrj7
	667fbb3e57069       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   5020e2cbb1fa2       coredns-7c65d6cfc9-8tqn8
	6b60a5977fe57       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   About a minute ago   Exited              kube-scheduler            0                   60de18f0c7ded       kube-scheduler-pause-170137
	97bce934e0f09       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Exited              etcd                      0                   44f823e07f656       etcd-pause-170137
	
	
	==> coredns [54fbea9ec018f6657691a05a645ab4a14fa248e6538e0aaa6b33f4bcdbc5e8d8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:34318 - 52756 "HINFO IN 6913580291352706447.4293852202948120045. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015419442s
	
	
	==> coredns [667fbb3e5706931d62ec8064b0e3050f2b38f94dcd31cdef183965f0e1b01717] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46416 - 14803 "HINFO IN 1179570859722145490.3756129640611081246. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013723578s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-170137
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-170137
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=pause-170137
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T20_06_50_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 20:06:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-170137
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 20:07:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 20:07:45 +0000   Tue, 01 Oct 2024 20:06:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 20:07:45 +0000   Tue, 01 Oct 2024 20:06:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 20:07:45 +0000   Tue, 01 Oct 2024 20:06:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 20:07:45 +0000   Tue, 01 Oct 2024 20:06:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.12
	  Hostname:    pause-170137
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 e558b5aa64f3466ea34f8de6b68a4a28
	  System UUID:                e558b5aa-64f3-466e-a34f-8de6b68a4a28
	  Boot ID:                    6b66f759-787f-4745-a48e-ce3b5f47c632
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-8tqn8                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     66s
	  kube-system                 etcd-pause-170137                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         70s
	  kube-system                 kube-apiserver-pause-170137             250m (12%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-controller-manager-pause-170137    200m (10%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-proxy-ffrj7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-scheduler-pause-170137             100m (5%)     0 (0%)      0 (0%)           0 (0%)         70s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 65s                kube-proxy       
	  Normal  Starting                 14s                kube-proxy       
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 71s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  71s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  70s                kubelet          Node pause-170137 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     70s                kubelet          Node pause-170137 status is now: NodeHasSufficientPID
	  Normal  NodeReady                70s                kubelet          Node pause-170137 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    70s                kubelet          Node pause-170137 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           67s                node-controller  Node pause-170137 event: Registered Node pause-170137 in Controller
	  Normal  Starting                 18s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18s (x8 over 18s)  kubelet          Node pause-170137 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18s (x8 over 18s)  kubelet          Node pause-170137 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18s (x7 over 18s)  kubelet          Node pause-170137 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12s                node-controller  Node pause-170137 event: Registered Node pause-170137 in Controller
	
	
	==> dmesg <==
	[ +10.278779] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.063293] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056456] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.173824] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.151371] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.301044] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +4.228110] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +4.325705] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.079350] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.493282] systemd-fstab-generator[1216]: Ignoring "noauto" option for root device
	[  +0.100548] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.815789] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	[  +0.675524] kauditd_printk_skb: 43 callbacks suppressed
	[Oct 1 20:07] systemd-fstab-generator[1995]: Ignoring "noauto" option for root device
	[  +0.100341] kauditd_printk_skb: 49 callbacks suppressed
	[  +0.066940] systemd-fstab-generator[2007]: Ignoring "noauto" option for root device
	[  +0.202415] systemd-fstab-generator[2020]: Ignoring "noauto" option for root device
	[  +0.206490] systemd-fstab-generator[2033]: Ignoring "noauto" option for root device
	[  +0.365726] systemd-fstab-generator[2061]: Ignoring "noauto" option for root device
	[  +1.200534] systemd-fstab-generator[2374]: Ignoring "noauto" option for root device
	[  +4.034647] kauditd_printk_skb: 210 callbacks suppressed
	[  +9.187102] systemd-fstab-generator[3170]: Ignoring "noauto" option for root device
	[  +0.089973] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.606945] kauditd_printk_skb: 40 callbacks suppressed
	[  +7.625093] systemd-fstab-generator[3507]: Ignoring "noauto" option for root device
	
	
	==> etcd [60d7da33991924efa83fc62d469b68461bb89fd6437148b9dbdcb1e1960617df] <==
	{"level":"info","ts":"2024-10-01T20:07:34.414499Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.12:2379"}
	{"level":"info","ts":"2024-10-01T20:07:34.414879Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T20:07:34.416140Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-01T20:07:34.416638Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-01T20:07:34.416688Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-01T20:07:55.332473Z","caller":"traceutil/trace.go:171","msg":"trace[208550121] linearizableReadLoop","detail":"{readStateIndex:494; appliedIndex:493; }","duration":"171.73869ms","start":"2024-10-01T20:07:55.160715Z","end":"2024-10-01T20:07:55.332454Z","steps":["trace[208550121] 'read index received'  (duration: 171.522596ms)","trace[208550121] 'applied index is now lower than readState.Index'  (duration: 215.255µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-01T20:07:55.332677Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.901799ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-170137\" ","response":"range_response_count:1 size:6990"}
	{"level":"info","ts":"2024-10-01T20:07:55.332773Z","caller":"traceutil/trace.go:171","msg":"trace[1002688057] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-170137; range_end:; response_count:1; response_revision:459; }","duration":"172.048635ms","start":"2024-10-01T20:07:55.160711Z","end":"2024-10-01T20:07:55.332760Z","steps":["trace[1002688057] 'agreement among raft nodes before linearized reading'  (duration: 171.834832ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T20:07:55.332965Z","caller":"traceutil/trace.go:171","msg":"trace[1612129183] transaction","detail":"{read_only:false; response_revision:459; number_of_response:1; }","duration":"181.03083ms","start":"2024-10-01T20:07:55.151924Z","end":"2024-10-01T20:07:55.332955Z","steps":["trace[1612129183] 'process raft request'  (duration: 180.404731ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T20:07:56.252332Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"236.533153ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T20:07:56.252469Z","caller":"traceutil/trace.go:171","msg":"trace[172454282] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:459; }","duration":"236.685685ms","start":"2024-10-01T20:07:56.015761Z","end":"2024-10-01T20:07:56.252447Z","steps":["trace[172454282] 'range keys from in-memory index tree'  (duration: 236.521045ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T20:07:56.252908Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"330.50603ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T20:07:56.252970Z","caller":"traceutil/trace.go:171","msg":"trace[654663731] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:459; }","duration":"330.576683ms","start":"2024-10-01T20:07:55.922386Z","end":"2024-10-01T20:07:56.252962Z","steps":["trace[654663731] 'range keys from in-memory index tree'  (duration: 330.497346ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T20:07:56.254028Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"475.55626ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5484982241399669423 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-170137\" mod_revision:459 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-170137\" value_size:6721 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-170137\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-01T20:07:56.254263Z","caller":"traceutil/trace.go:171","msg":"trace[1232195808] transaction","detail":"{read_only:false; response_revision:462; number_of_response:1; }","duration":"380.251731ms","start":"2024-10-01T20:07:55.874003Z","end":"2024-10-01T20:07:56.254255Z","steps":["trace[1232195808] 'process raft request'  (duration: 380.226044ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T20:07:56.254656Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T20:07:55.873981Z","time spent":"380.578364ms","remote":"127.0.0.1:57874","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-mtruts7gaawo33qz7nqe3y5c4a\" mod_revision:397 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-mtruts7gaawo33qz7nqe3y5c4a\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-mtruts7gaawo33qz7nqe3y5c4a\" > >"}
	{"level":"info","ts":"2024-10-01T20:07:56.256041Z","caller":"traceutil/trace.go:171","msg":"trace[2038229035] transaction","detail":"{read_only:false; response_revision:461; number_of_response:1; }","duration":"387.378927ms","start":"2024-10-01T20:07:55.868650Z","end":"2024-10-01T20:07:56.256029Z","steps":["trace[2038229035] 'process raft request'  (duration: 385.502576ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T20:07:56.256223Z","caller":"traceutil/trace.go:171","msg":"trace[49797945] linearizableReadLoop","detail":"{readStateIndex:495; appliedIndex:494; }","duration":"594.431371ms","start":"2024-10-01T20:07:55.660984Z","end":"2024-10-01T20:07:56.255415Z","steps":["trace[49797945] 'read index received'  (duration: 117.079071ms)","trace[49797945] 'applied index is now lower than readState.Index'  (duration: 477.350695ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-01T20:07:56.256327Z","caller":"traceutil/trace.go:171","msg":"trace[424267893] transaction","detail":"{read_only:false; response_revision:460; number_of_response:1; }","duration":"912.816707ms","start":"2024-10-01T20:07:55.343493Z","end":"2024-10-01T20:07:56.256309Z","steps":["trace[424267893] 'process raft request'  (duration: 434.512998ms)","trace[424267893] 'compare'  (duration: 474.598753ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-01T20:07:56.256393Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T20:07:55.343476Z","time spent":"912.880504ms","remote":"127.0.0.1:57790","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6783,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-170137\" mod_revision:459 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-170137\" value_size:6721 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-170137\" > >"}
	{"level":"warn","ts":"2024-10-01T20:07:56.256515Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"595.57418ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-170137\" ","response":"range_response_count:1 size:6798"}
	{"level":"info","ts":"2024-10-01T20:07:56.256551Z","caller":"traceutil/trace.go:171","msg":"trace[2100135488] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-170137; range_end:; response_count:1; response_revision:462; }","duration":"595.607514ms","start":"2024-10-01T20:07:55.660935Z","end":"2024-10-01T20:07:56.256543Z","steps":["trace[2100135488] 'agreement among raft nodes before linearized reading'  (duration: 595.555213ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T20:07:56.256575Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T20:07:55.660898Z","time spent":"595.671615ms","remote":"127.0.0.1:57790","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":6820,"request content":"key:\"/registry/pods/kube-system/kube-apiserver-pause-170137\" "}
	{"level":"warn","ts":"2024-10-01T20:07:56.256230Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T20:07:55.868626Z","time spent":"387.525773ms","remote":"127.0.0.1:57874","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":537,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-170137\" mod_revision:396 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-170137\" value_size:484 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-170137\" > >"}
	{"level":"info","ts":"2024-10-01T20:07:56.892617Z","caller":"traceutil/trace.go:171","msg":"trace[1622587799] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"178.292665ms","start":"2024-10-01T20:07:56.714284Z","end":"2024-10-01T20:07:56.892576Z","steps":["trace[1622587799] 'process raft request'  (duration: 125.873572ms)","trace[1622587799] 'compare'  (duration: 52.318845ms)"],"step_count":2}
	
	
	==> etcd [97bce934e0f09a84c734707360b83dc0263004bd1431ad3aae6e7a0b1174e051] <==
	{"level":"info","ts":"2024-10-01T20:06:45.204446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3a3129d9bdcf4c1e became leader at term 2"}
	{"level":"info","ts":"2024-10-01T20:06:45.204472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3a3129d9bdcf4c1e elected leader 3a3129d9bdcf4c1e at term 2"}
	{"level":"info","ts":"2024-10-01T20:06:45.215125Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T20:06:45.218330Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"3a3129d9bdcf4c1e","local-member-attributes":"{Name:pause-170137 ClientURLs:[https://192.168.50.12:2379]}","request-path":"/0/members/3a3129d9bdcf4c1e/attributes","cluster-id":"53bd91c8d6bcbd47","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-01T20:06:45.218583Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T20:06:45.218715Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T20:06:45.219051Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-01T20:06:45.240026Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-01T20:06:45.219724Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T20:06:45.219945Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"53bd91c8d6bcbd47","local-member-id":"3a3129d9bdcf4c1e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T20:06:45.241251Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T20:06:45.241321Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T20:06:45.236963Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T20:06:45.242162Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-01T20:06:45.245333Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.12:2379"}
	{"level":"info","ts":"2024-10-01T20:07:20.895529Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-01T20:07:20.895699Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-170137","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.12:2380"],"advertise-client-urls":["https://192.168.50.12:2379"]}
	{"level":"warn","ts":"2024-10-01T20:07:20.895926Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-01T20:07:20.896070Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-01T20:07:20.979937Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.12:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-01T20:07:20.980073Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.12:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-01T20:07:20.980390Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3a3129d9bdcf4c1e","current-leader-member-id":"3a3129d9bdcf4c1e"}
	{"level":"info","ts":"2024-10-01T20:07:20.983140Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.50.12:2380"}
	{"level":"info","ts":"2024-10-01T20:07:20.983260Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.50.12:2380"}
	{"level":"info","ts":"2024-10-01T20:07:20.983283Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-170137","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.12:2380"],"advertise-client-urls":["https://192.168.50.12:2379"]}
	
	
	==> kernel <==
	 20:08:00 up 1 min,  0 users,  load average: 1.14, 0.42, 0.15
	Linux pause-170137 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [499a59a4cf03309bc4eb2c7fd056ae703281dbbf3731c6a575f38480721834ec] <==
	I1001 20:07:36.116152       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1001 20:07:36.116294       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1001 20:07:36.116367       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I1001 20:07:36.116417       1 controller.go:84] Shutting down OpenAPI AggregationController
	I1001 20:07:36.116471       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I1001 20:07:36.120988       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1001 20:07:36.121061       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1001 20:07:36.121543       1 secure_serving.go:258] Stopped listening on [::]:8443
	I1001 20:07:36.121596       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1001 20:07:36.121736       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1001 20:07:36.124448       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I1001 20:07:36.131217       1 controller.go:157] Shutting down quota evaluator
	I1001 20:07:36.131739       1 controller.go:176] quota evaluator worker shutdown
	I1001 20:07:36.131901       1 controller.go:176] quota evaluator worker shutdown
	I1001 20:07:36.131980       1 controller.go:176] quota evaluator worker shutdown
	I1001 20:07:36.132052       1 controller.go:176] quota evaluator worker shutdown
	I1001 20:07:36.132081       1 controller.go:176] quota evaluator worker shutdown
	W1001 20:07:36.848195       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1001 20:07:36.848409       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1001 20:07:37.848128       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1001 20:07:37.848624       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	E1001 20:07:38.847739       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1001 20:07:38.848370       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1001 20:07:39.847586       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1001 20:07:39.847672       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	
	
	==> kube-apiserver [4dabed2d08e52f4ba9d58733632c843f568a404800e60b3f0f832cd4aab283bc] <==
	I1001 20:07:45.571800       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1001 20:07:45.571953       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1001 20:07:45.572164       1 shared_informer.go:320] Caches are synced for configmaps
	I1001 20:07:45.572353       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1001 20:07:45.573501       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1001 20:07:45.573551       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1001 20:07:45.577481       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1001 20:07:45.583120       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1001 20:07:45.584324       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1001 20:07:45.584382       1 aggregator.go:171] initial CRD sync complete...
	I1001 20:07:45.584407       1 autoregister_controller.go:144] Starting autoregister controller
	I1001 20:07:45.584429       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1001 20:07:45.584451       1 cache.go:39] Caches are synced for autoregister controller
	I1001 20:07:45.588494       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1001 20:07:45.588538       1 policy_source.go:224] refreshing policies
	I1001 20:07:45.597333       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1001 20:07:46.371142       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1001 20:07:46.688209       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.12]
	I1001 20:07:46.689648       1 controller.go:615] quota admission added evaluator for: endpoints
	I1001 20:07:46.697333       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1001 20:07:46.962510       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1001 20:07:46.979455       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1001 20:07:47.053597       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1001 20:07:47.097296       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1001 20:07:47.113661       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [0914589df96b483b8ebfe729b2aa7ec0e5c4ad21e54b98dadde189397f0853ea] <==
	I1001 20:07:48.828867       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1001 20:07:48.830032       1 shared_informer.go:320] Caches are synced for expand
	I1001 20:07:48.830097       1 shared_informer.go:320] Caches are synced for crt configmap
	I1001 20:07:48.832420       1 shared_informer.go:320] Caches are synced for job
	I1001 20:07:48.836942       1 shared_informer.go:320] Caches are synced for ephemeral
	I1001 20:07:48.838170       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1001 20:07:48.844617       1 shared_informer.go:320] Caches are synced for GC
	I1001 20:07:48.844714       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1001 20:07:48.848005       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1001 20:07:48.850450       1 shared_informer.go:320] Caches are synced for service account
	I1001 20:07:48.852790       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1001 20:07:48.856386       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1001 20:07:48.856583       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="78.515µs"
	I1001 20:07:48.859767       1 shared_informer.go:320] Caches are synced for taint
	I1001 20:07:48.859917       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1001 20:07:48.860016       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-170137"
	I1001 20:07:48.860129       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1001 20:07:48.938780       1 shared_informer.go:320] Caches are synced for resource quota
	I1001 20:07:48.980998       1 shared_informer.go:320] Caches are synced for disruption
	I1001 20:07:48.982899       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1001 20:07:49.006773       1 shared_informer.go:320] Caches are synced for resource quota
	I1001 20:07:49.030919       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1001 20:07:49.478023       1 shared_informer.go:320] Caches are synced for garbage collector
	I1001 20:07:49.479079       1 shared_informer.go:320] Caches are synced for garbage collector
	I1001 20:07:49.479124       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [2e0d4a72c290677d3b877779ce453b0e69fa0015617db7f645ad4688395ab996] <==
	I1001 20:07:30.870035       1 serving.go:386] Generated self-signed cert in-memory
	I1001 20:07:31.699272       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I1001 20:07:31.699310       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 20:07:31.700794       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1001 20:07:31.700886       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1001 20:07:31.700902       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1001 20:07:31.700913       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-proxy [1b218656faf8165138d3e921d3db2d506d35669a650f07074504daff545fa656] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 20:07:30.766130       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 20:07:35.997082       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.12"]
	E1001 20:07:35.998695       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 20:07:36.058717       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 20:07:36.059307       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 20:07:36.061698       1 server_linux.go:169] "Using iptables Proxier"
	I1001 20:07:36.069699       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 20:07:36.070049       1 server.go:483] "Version info" version="v1.31.1"
	I1001 20:07:36.070075       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 20:07:36.090974       1 config.go:199] "Starting service config controller"
	I1001 20:07:36.091081       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 20:07:36.091819       1 config.go:328] "Starting node config controller"
	I1001 20:07:36.091944       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 20:07:36.092031       1 config.go:105] "Starting endpoint slice config controller"
	I1001 20:07:36.092050       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 20:07:36.192335       1 shared_informer.go:320] Caches are synced for service config
	I1001 20:07:36.192528       1 shared_informer.go:320] Caches are synced for node config
	I1001 20:07:36.192617       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [deedf3824bc9483ca76172a056cfd5164feda30f961e28daeabeaabe04e2c461] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 20:07:46.188533       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 20:07:46.198165       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.12"]
	E1001 20:07:46.198257       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 20:07:46.246206       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 20:07:46.246281       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 20:07:46.246316       1 server_linux.go:169] "Using iptables Proxier"
	I1001 20:07:46.249594       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 20:07:46.249877       1 server.go:483] "Version info" version="v1.31.1"
	I1001 20:07:46.249903       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 20:07:46.252419       1 config.go:199] "Starting service config controller"
	I1001 20:07:46.252459       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 20:07:46.252486       1 config.go:105] "Starting endpoint slice config controller"
	I1001 20:07:46.252493       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 20:07:46.253015       1 config.go:328] "Starting node config controller"
	I1001 20:07:46.253038       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 20:07:46.353338       1 shared_informer.go:320] Caches are synced for node config
	I1001 20:07:46.353395       1 shared_informer.go:320] Caches are synced for service config
	I1001 20:07:46.353433       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6b60a5977fe573029539badfa21465c04853167aa7eec89399606bf02e954db7] <==
	E1001 20:06:47.950264       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1001 20:06:47.984058       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1001 20:06:47.984105       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 20:06:47.995695       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1001 20:06:47.995814       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 20:06:48.052976       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1001 20:06:48.053070       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1001 20:06:48.092032       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1001 20:06:48.092140       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1001 20:06:48.250454       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1001 20:06:48.250597       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:06:48.265582       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1001 20:06:48.266543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:06:48.309912       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1001 20:06:48.310048       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:06:48.336233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1001 20:06:48.336340       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:06:48.347534       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1001 20:06:48.347636       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:06:48.389258       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1001 20:06:48.389380       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1001 20:06:51.103541       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1001 20:07:20.894063       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1001 20:07:20.894232       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1001 20:07:20.894414       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [956aef47f974a2576213aad53039396b3fd661c67b8377801c455d7c25af4d5a] <==
	W1001 20:07:35.892074       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1001 20:07:35.892133       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1001 20:07:35.892165       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1001 20:07:35.977624       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1001 20:07:35.980881       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 20:07:35.984101       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1001 20:07:35.985047       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 20:07:35.987564       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1001 20:07:35.985084       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1001 20:07:36.087868       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1001 20:07:45.394165       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E1001 20:07:45.394247       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E1001 20:07:45.394284       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E1001 20:07:45.394332       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E1001 20:07:45.394382       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E1001 20:07:45.394436       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E1001 20:07:45.394499       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E1001 20:07:45.394558       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E1001 20:07:45.394620       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E1001 20:07:45.394651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E1001 20:07:45.394707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E1001 20:07:45.394772       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E1001 20:07:45.394855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E1001 20:07:45.394893       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E1001 20:07:45.489166       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	
	
	==> kubelet <==
	Oct 01 20:07:42 pause-170137 kubelet[3177]: I1001 20:07:42.946067    3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/03cea441204a38481b2802a79896a7c8-etcd-certs\") pod \"etcd-pause-170137\" (UID: \"03cea441204a38481b2802a79896a7c8\") " pod="kube-system/etcd-pause-170137"
	Oct 01 20:07:42 pause-170137 kubelet[3177]: I1001 20:07:42.946096    3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/39cd11daa5cdab6a4704ff3a11cdc428-ca-certs\") pod \"kube-apiserver-pause-170137\" (UID: \"39cd11daa5cdab6a4704ff3a11cdc428\") " pod="kube-system/kube-apiserver-pause-170137"
	Oct 01 20:07:42 pause-170137 kubelet[3177]: I1001 20:07:42.946119    3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/39cd11daa5cdab6a4704ff3a11cdc428-k8s-certs\") pod \"kube-apiserver-pause-170137\" (UID: \"39cd11daa5cdab6a4704ff3a11cdc428\") " pod="kube-system/kube-apiserver-pause-170137"
	Oct 01 20:07:42 pause-170137 kubelet[3177]: I1001 20:07:42.946155    3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/39cd11daa5cdab6a4704ff3a11cdc428-usr-share-ca-certificates\") pod \"kube-apiserver-pause-170137\" (UID: \"39cd11daa5cdab6a4704ff3a11cdc428\") " pod="kube-system/kube-apiserver-pause-170137"
	Oct 01 20:07:42 pause-170137 kubelet[3177]: I1001 20:07:42.946195    3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29e51170df960e2d5f453b5738c1d025-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-170137\" (UID: \"29e51170df960e2d5f453b5738c1d025\") " pod="kube-system/kube-controller-manager-pause-170137"
	Oct 01 20:07:42 pause-170137 kubelet[3177]: I1001 20:07:42.946263    3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/03cea441204a38481b2802a79896a7c8-etcd-data\") pod \"etcd-pause-170137\" (UID: \"03cea441204a38481b2802a79896a7c8\") " pod="kube-system/etcd-pause-170137"
	Oct 01 20:07:42 pause-170137 kubelet[3177]: I1001 20:07:42.946295    3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29e51170df960e2d5f453b5738c1d025-k8s-certs\") pod \"kube-controller-manager-pause-170137\" (UID: \"29e51170df960e2d5f453b5738c1d025\") " pod="kube-system/kube-controller-manager-pause-170137"
	Oct 01 20:07:43 pause-170137 kubelet[3177]: I1001 20:07:43.105146    3177 kubelet_node_status.go:72] "Attempting to register node" node="pause-170137"
	Oct 01 20:07:43 pause-170137 kubelet[3177]: E1001 20:07:43.105930    3177 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.12:8443: connect: connection refused" node="pause-170137"
	Oct 01 20:07:43 pause-170137 kubelet[3177]: I1001 20:07:43.186684    3177 scope.go:117] "RemoveContainer" containerID="2e0d4a72c290677d3b877779ce453b0e69fa0015617db7f645ad4688395ab996"
	Oct 01 20:07:43 pause-170137 kubelet[3177]: I1001 20:07:43.186797    3177 scope.go:117] "RemoveContainer" containerID="499a59a4cf03309bc4eb2c7fd056ae703281dbbf3731c6a575f38480721834ec"
	Oct 01 20:07:43 pause-170137 kubelet[3177]: E1001 20:07:43.304445    3177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-170137?timeout=10s\": dial tcp 192.168.50.12:8443: connect: connection refused" interval="800ms"
	Oct 01 20:07:43 pause-170137 kubelet[3177]: I1001 20:07:43.507153    3177 kubelet_node_status.go:72] "Attempting to register node" node="pause-170137"
	Oct 01 20:07:45 pause-170137 kubelet[3177]: I1001 20:07:45.669174    3177 kubelet_node_status.go:111] "Node was previously registered" node="pause-170137"
	Oct 01 20:07:45 pause-170137 kubelet[3177]: I1001 20:07:45.669647    3177 kubelet_node_status.go:75] "Successfully registered node" node="pause-170137"
	Oct 01 20:07:45 pause-170137 kubelet[3177]: I1001 20:07:45.669912    3177 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 01 20:07:45 pause-170137 kubelet[3177]: I1001 20:07:45.671071    3177 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 01 20:07:45 pause-170137 kubelet[3177]: I1001 20:07:45.676063    3177 apiserver.go:52] "Watching apiserver"
	Oct 01 20:07:45 pause-170137 kubelet[3177]: I1001 20:07:45.708744    3177 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 01 20:07:45 pause-170137 kubelet[3177]: I1001 20:07:45.796611    3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9579b36d-adb4-4b12-a1de-b318cb62b8a3-lib-modules\") pod \"kube-proxy-ffrj7\" (UID: \"9579b36d-adb4-4b12-a1de-b318cb62b8a3\") " pod="kube-system/kube-proxy-ffrj7"
	Oct 01 20:07:45 pause-170137 kubelet[3177]: I1001 20:07:45.796685    3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9579b36d-adb4-4b12-a1de-b318cb62b8a3-xtables-lock\") pod \"kube-proxy-ffrj7\" (UID: \"9579b36d-adb4-4b12-a1de-b318cb62b8a3\") " pod="kube-system/kube-proxy-ffrj7"
	Oct 01 20:07:45 pause-170137 kubelet[3177]: E1001 20:07:45.881097    3177 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-170137\" already exists" pod="kube-system/kube-apiserver-pause-170137"
	Oct 01 20:07:45 pause-170137 kubelet[3177]: I1001 20:07:45.989615    3177 scope.go:117] "RemoveContainer" containerID="1b218656faf8165138d3e921d3db2d506d35669a650f07074504daff545fa656"
	Oct 01 20:07:52 pause-170137 kubelet[3177]: E1001 20:07:52.814638    3177 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727813272814256751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:07:52 pause-170137 kubelet[3177]: E1001 20:07:52.814662    3177 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727813272814256751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-170137 -n pause-170137
helpers_test.go:261: (dbg) Run:  kubectl --context pause-170137 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-170137 -n pause-170137
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-170137 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-170137 logs -n 25: (1.466465968s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p test-preload-118977         | test-preload-118977      | jenkins | v1.34.0 | 01 Oct 24 20:03 UTC | 01 Oct 24 20:03 UTC |
	| start   | -p scheduled-stop-142421       | scheduled-stop-142421    | jenkins | v1.34.0 | 01 Oct 24 20:03 UTC | 01 Oct 24 20:04 UTC |
	|         | --memory=2048 --driver=kvm2    |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-142421       | scheduled-stop-142421    | jenkins | v1.34.0 | 01 Oct 24 20:04 UTC |                     |
	|         | --schedule 5m                  |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-142421       | scheduled-stop-142421    | jenkins | v1.34.0 | 01 Oct 24 20:04 UTC |                     |
	|         | --schedule 5m                  |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-142421       | scheduled-stop-142421    | jenkins | v1.34.0 | 01 Oct 24 20:04 UTC |                     |
	|         | --schedule 5m                  |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-142421       | scheduled-stop-142421    | jenkins | v1.34.0 | 01 Oct 24 20:04 UTC |                     |
	|         | --schedule 15s                 |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-142421       | scheduled-stop-142421    | jenkins | v1.34.0 | 01 Oct 24 20:04 UTC |                     |
	|         | --schedule 15s                 |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-142421       | scheduled-stop-142421    | jenkins | v1.34.0 | 01 Oct 24 20:04 UTC |                     |
	|         | --schedule 15s                 |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-142421       | scheduled-stop-142421    | jenkins | v1.34.0 | 01 Oct 24 20:04 UTC | 01 Oct 24 20:04 UTC |
	|         | --cancel-scheduled             |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-142421       | scheduled-stop-142421    | jenkins | v1.34.0 | 01 Oct 24 20:04 UTC |                     |
	|         | --schedule 15s                 |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-142421       | scheduled-stop-142421    | jenkins | v1.34.0 | 01 Oct 24 20:04 UTC |                     |
	|         | --schedule 15s                 |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-142421       | scheduled-stop-142421    | jenkins | v1.34.0 | 01 Oct 24 20:04 UTC | 01 Oct 24 20:05 UTC |
	|         | --schedule 15s                 |                          |         |         |                     |                     |
	| delete  | -p scheduled-stop-142421       | scheduled-stop-142421    | jenkins | v1.34.0 | 01 Oct 24 20:05 UTC | 01 Oct 24 20:05 UTC |
	| start   | -p offline-crio-770413         | offline-crio-770413      | jenkins | v1.34.0 | 01 Oct 24 20:05 UTC | 01 Oct 24 20:07 UTC |
	|         | --alsologtostderr              |                          |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                          |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| start   | -p pause-170137 --memory=2048  | pause-170137             | jenkins | v1.34.0 | 01 Oct 24 20:05 UTC | 01 Oct 24 20:07 UTC |
	|         | --install-addons=false         |                          |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| start   | -p NoKubernetes-791490         | NoKubernetes-791490      | jenkins | v1.34.0 | 01 Oct 24 20:05 UTC |                     |
	|         | --no-kubernetes                |                          |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                          |         |         |                     |                     |
	|         | --driver=kvm2                  |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| start   | -p NoKubernetes-791490         | NoKubernetes-791490      | jenkins | v1.34.0 | 01 Oct 24 20:05 UTC | 01 Oct 24 20:07 UTC |
	|         | --driver=kvm2                  |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| start   | -p running-upgrade-819936      | minikube                 | jenkins | v1.26.0 | 01 Oct 24 20:05 UTC | 01 Oct 24 20:07 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                          |         |         |                     |                     |
	|         |  --container-runtime=crio      |                          |         |         |                     |                     |
	| start   | -p pause-170137                | pause-170137             | jenkins | v1.34.0 | 01 Oct 24 20:07 UTC | 01 Oct 24 20:07 UTC |
	|         | --alsologtostderr              |                          |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| delete  | -p offline-crio-770413         | offline-crio-770413      | jenkins | v1.34.0 | 01 Oct 24 20:07 UTC | 01 Oct 24 20:07 UTC |
	| start   | -p force-systemd-env-528861    | force-systemd-env-528861 | jenkins | v1.34.0 | 01 Oct 24 20:07 UTC |                     |
	|         | --memory=2048                  |                          |         |         |                     |                     |
	|         | --alsologtostderr              |                          |         |         |                     |                     |
	|         | -v=5 --driver=kvm2             |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| start   | -p NoKubernetes-791490         | NoKubernetes-791490      | jenkins | v1.34.0 | 01 Oct 24 20:07 UTC | 01 Oct 24 20:07 UTC |
	|         | --no-kubernetes --driver=kvm2  |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| start   | -p running-upgrade-819936      | running-upgrade-819936   | jenkins | v1.34.0 | 01 Oct 24 20:07 UTC |                     |
	|         | --memory=2200                  |                          |         |         |                     |                     |
	|         | --alsologtostderr              |                          |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| delete  | -p NoKubernetes-791490         | NoKubernetes-791490      | jenkins | v1.34.0 | 01 Oct 24 20:07 UTC | 01 Oct 24 20:08 UTC |
	| start   | -p NoKubernetes-791490         | NoKubernetes-791490      | jenkins | v1.34.0 | 01 Oct 24 20:08 UTC |                     |
	|         | --no-kubernetes --driver=kvm2  |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	|---------|--------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 20:08:00
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 20:08:00.806288   56834 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:08:00.806444   56834 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:08:00.806449   56834 out.go:358] Setting ErrFile to fd 2...
	I1001 20:08:00.806454   56834 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:08:00.806747   56834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 20:08:00.807515   56834 out.go:352] Setting JSON to false
	I1001 20:08:00.808833   56834 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6623,"bootTime":1727806658,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 20:08:00.808961   56834 start.go:139] virtualization: kvm guest
	I1001 20:08:00.811016   56834 out.go:177] * [NoKubernetes-791490] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 20:08:00.812108   56834 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 20:08:00.812132   56834 notify.go:220] Checking for updates...
	I1001 20:08:00.814172   56834 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 20:08:00.815486   56834 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:08:00.817063   56834 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 20:08:00.818280   56834 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 20:08:00.819293   56834 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 20:08:00.821101   56834 config.go:182] Loaded profile config "force-systemd-env-528861": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:08:00.821302   56834 config.go:182] Loaded profile config "pause-170137": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:08:00.821420   56834 config.go:182] Loaded profile config "running-upgrade-819936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1001 20:08:00.821440   56834 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1001 20:08:00.821541   56834 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 20:08:00.863266   56834 out.go:177] * Using the kvm2 driver based on user configuration
	I1001 20:08:00.864332   56834 start.go:297] selected driver: kvm2
	I1001 20:08:00.864339   56834 start.go:901] validating driver "kvm2" against <nil>
	I1001 20:08:00.864350   56834 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 20:08:00.864765   56834 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1001 20:08:00.864835   56834 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:08:00.864899   56834 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-11198/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 20:08:00.882789   56834 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 20:08:00.882853   56834 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 20:08:00.883428   56834 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1001 20:08:00.883597   56834 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 20:08:00.883622   56834 cni.go:84] Creating CNI manager for ""
	I1001 20:08:00.883690   56834 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:08:00.883697   56834 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 20:08:00.883717   56834 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1001 20:08:00.883776   56834 start.go:340] cluster config:
	{Name:NoKubernetes-791490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-791490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:08:00.883922   56834 iso.go:125] acquiring lock: {Name:mk06aa0d5182c9fbfa5e40313025cbec2b4400a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:08:00.885412   56834 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-791490
	I1001 20:08:00.886622   56834 preload.go:131] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W1001 20:08:01.397759   56834 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1001 20:08:01.397920   56834 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/NoKubernetes-791490/config.json ...
	I1001 20:08:01.397953   56834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/NoKubernetes-791490/config.json: {Name:mk885ebb9f1642122977ddb32d905fab5393067c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:08:01.398089   56834 start.go:360] acquireMachinesLock for NoKubernetes-791490: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 20:08:01.398109   56834 start.go:364] duration metric: took 12.024µs to acquireMachinesLock for "NoKubernetes-791490"
	I1001 20:08:01.398119   56834 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-791490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-791490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 20:08:01.398200   56834 start.go:125] createHost starting for "" (driver="kvm2")
	I1001 20:07:57.428432   56047 out.go:235]   - Booting up control plane ...
	I1001 20:07:57.428585   56047 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:07:57.429424   56047 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:07:57.432287   56047 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:07:57.452497   56047 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:07:57.461860   56047 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:07:57.461941   56047 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:07:57.633406   56047 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 20:07:57.633558   56047 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 20:07:59.134375   56047 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501866764s
	I1001 20:07:59.134475   56047 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	
	
	==> CRI-O <==
	Oct 01 20:08:02 pause-170137 crio[2234]: time="2024-10-01 20:08:02.451571265Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727813282451549501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a623d8c9-c8f8-4d7e-ae3b-b1af77a88b2c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:08:02 pause-170137 crio[2234]: time="2024-10-01 20:08:02.452082670Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26265a07-6dbf-4450-ab76-96d253381934 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:08:02 pause-170137 crio[2234]: time="2024-10-01 20:08:02.452160596Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26265a07-6dbf-4450-ab76-96d253381934 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:08:02 pause-170137 crio[2234]: time="2024-10-01 20:08:02.452461566Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:deedf3824bc9483ca76172a056cfd5164feda30f961e28daeabeaabe04e2c461,PodSandboxId:4c134d4b87189779ca1e5f046dfb42809bf2f2abbe419c6800900bd996f01821,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727813266001396140,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ffrj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9579b36d-adb4-4b12-a1de-b318cb62b8a3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0914589df96b483b8ebfe729b2aa7ec0e5c4ad21e54b98dadde189397f0853ea,PodSandboxId:5a5e1a400cb410ad546ba4f8e49985a0baa6f1790ccd87ab9aca831b36a6e3dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727813263199677736,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e51170df960e2d5f453b5738c1d025,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dabed2d08e52f4ba9d58733632c843f568a404800e60b3f0f832cd4aab283bc,PodSandboxId:9b55a4c23978acaf90c4390fe582d615987ec18383f8e584152c4bffae4d3151,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727813263210545432,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cd11daa5cdab6a4704ff3a11cdc428,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:956aef47f974a2576213aad53039396b3fd661c67b8377801c455d7c25af4d5a,PodSandboxId:120857355ab86b7130dac7f09e63a36f45ab94cff94b872058650ff83b68986c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727813253323500747,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e6a42ddfc1cb7a814eeb7af718d4bb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d7da33991924efa83fc62d469b68461bb89fd6437148b9dbdcb1e1960617df,PodSandboxId:6768cd3d490fce6181471abc753f441ac37c0df454687299db83648c397db6f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727813251412387497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03cea441204a38481b2802a79896a7c8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fbea9ec018f6657691a05a645ab4a14fa248e6538e0aaa6b33f4bcdbc5e8d8,PodSandboxId:78107dc435911602060535be331a2735409bc3b55d403aa8e0cc2533e80f9c38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727813250946779412,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8tqn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42e5352-5fa7-4a31-97a6-13e95b760487,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499a59a4cf03309bc4eb2c7fd056ae703281dbbf3731c6a575f38480721834ec,PodSandboxId:9b55a4c23978acaf90c4390fe582d615987ec18383f8e584152c4bffae4d3151,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727813250097770225,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cd11daa5cdab6a4704ff3a11cdc428,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0d4a72c290677d3b877779ce453b0e69fa0015617db7f645ad4688395ab996,PodSandboxId:5a5e1a400cb410ad546ba4f8e49985a0baa6f1790ccd87ab9aca831b36a6e3dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727813249976897819,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e51170df960e2d5f453b5738c1d025,},Annotations:map[string]string{io.kubernetes
.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b218656faf8165138d3e921d3db2d506d35669a650f07074504daff545fa656,PodSandboxId:4c134d4b87189779ca1e5f046dfb42809bf2f2abbe419c6800900bd996f01821,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727813249757550854,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ffrj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9579b36d-adb4-4b12-a1de-b318cb62b8a3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667fbb3e5706931d62ec8064b0e3050f2b38f94dcd31cdef183965f0e1b01717,PodSandboxId:5020e2cbb1fa206129471503a766d62db17a661c3f70d486c66713855ceb4d1e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727813216969711621,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8tqn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42e5352-5fa7-4a31-97a6-13e95b760487,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\
":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b60a5977fe573029539badfa21465c04853167aa7eec89399606bf02e954db7,PodSandboxId:60de18f0c7ded2535525e50ed8bbb3976400aecd3e1c5edef038ed7280581f84,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727813204351460001,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-170137,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e6a42ddfc1cb7a814eeb7af718d4bb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97bce934e0f09a84c734707360b83dc0263004bd1431ad3aae6e7a0b1174e051,PodSandboxId:44f823e07f6564322d4b75c366c2252179746733b04f417efe785bc22cc3f254,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727813204283638376,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 03cea441204a38481b2802a79896a7c8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=26265a07-6dbf-4450-ab76-96d253381934 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:08:02 pause-170137 crio[2234]: time="2024-10-01 20:08:02.506687555Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d77eb67d-371c-4adb-9e30-cc104d775efa name=/runtime.v1.RuntimeService/Version
	Oct 01 20:08:02 pause-170137 crio[2234]: time="2024-10-01 20:08:02.506774562Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d77eb67d-371c-4adb-9e30-cc104d775efa name=/runtime.v1.RuntimeService/Version
	Oct 01 20:08:02 pause-170137 crio[2234]: time="2024-10-01 20:08:02.508472271Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b93fd2eb-cf77-4114-8e5e-fce2b59f9f40 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:08:02 pause-170137 crio[2234]: time="2024-10-01 20:08:02.508903483Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727813282508877621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b93fd2eb-cf77-4114-8e5e-fce2b59f9f40 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:08:02 pause-170137 crio[2234]: time="2024-10-01 20:08:02.509440904Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ffde950-75ee-44f2-9272-6b986d5decb5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:08:02 pause-170137 crio[2234]: time="2024-10-01 20:08:02.509567858Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ffde950-75ee-44f2-9272-6b986d5decb5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:08:02 pause-170137 crio[2234]: time="2024-10-01 20:08:02.509862547Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:deedf3824bc9483ca76172a056cfd5164feda30f961e28daeabeaabe04e2c461,PodSandboxId:4c134d4b87189779ca1e5f046dfb42809bf2f2abbe419c6800900bd996f01821,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727813266001396140,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ffrj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9579b36d-adb4-4b12-a1de-b318cb62b8a3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0914589df96b483b8ebfe729b2aa7ec0e5c4ad21e54b98dadde189397f0853ea,PodSandboxId:5a5e1a400cb410ad546ba4f8e49985a0baa6f1790ccd87ab9aca831b36a6e3dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727813263199677736,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e51170df960e2d5f453b5738c1d025,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dabed2d08e52f4ba9d58733632c843f568a404800e60b3f0f832cd4aab283bc,PodSandboxId:9b55a4c23978acaf90c4390fe582d615987ec18383f8e584152c4bffae4d3151,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727813263210545432,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cd11daa5cdab6a4704ff3a11cdc428,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:956aef47f974a2576213aad53039396b3fd661c67b8377801c455d7c25af4d5a,PodSandboxId:120857355ab86b7130dac7f09e63a36f45ab94cff94b872058650ff83b68986c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727813253323500747,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e6a42ddfc1cb7a814eeb7af718d4bb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d7da33991924efa83fc62d469b68461bb89fd6437148b9dbdcb1e1960617df,PodSandboxId:6768cd3d490fce6181471abc753f441ac37c0df454687299db83648c397db6f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727813251412387497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03cea441204a38481b2802a79896a7c8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fbea9ec018f6657691a05a645ab4a14fa248e6538e0aaa6b33f4bcdbc5e8d8,PodSandboxId:78107dc435911602060535be331a2735409bc3b55d403aa8e0cc2533e80f9c38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727813250946779412,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8tqn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42e5352-5fa7-4a31-97a6-13e95b760487,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499a59a4cf03309bc4eb2c7fd056ae703281dbbf3731c6a575f38480721834ec,PodSandboxId:9b55a4c23978acaf90c4390fe582d615987ec18383f8e584152c4bffae4d3151,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727813250097770225,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cd11daa5cdab6a4704ff3a11cdc428,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0d4a72c290677d3b877779ce453b0e69fa0015617db7f645ad4688395ab996,PodSandboxId:5a5e1a400cb410ad546ba4f8e49985a0baa6f1790ccd87ab9aca831b36a6e3dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727813249976897819,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e51170df960e2d5f453b5738c1d025,},Annotations:map[string]string{io.kubernetes
.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b218656faf8165138d3e921d3db2d506d35669a650f07074504daff545fa656,PodSandboxId:4c134d4b87189779ca1e5f046dfb42809bf2f2abbe419c6800900bd996f01821,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727813249757550854,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ffrj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9579b36d-adb4-4b12-a1de-b318cb62b8a3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667fbb3e5706931d62ec8064b0e3050f2b38f94dcd31cdef183965f0e1b01717,PodSandboxId:5020e2cbb1fa206129471503a766d62db17a661c3f70d486c66713855ceb4d1e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727813216969711621,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8tqn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42e5352-5fa7-4a31-97a6-13e95b760487,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\
":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b60a5977fe573029539badfa21465c04853167aa7eec89399606bf02e954db7,PodSandboxId:60de18f0c7ded2535525e50ed8bbb3976400aecd3e1c5edef038ed7280581f84,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727813204351460001,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-170137,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e6a42ddfc1cb7a814eeb7af718d4bb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97bce934e0f09a84c734707360b83dc0263004bd1431ad3aae6e7a0b1174e051,PodSandboxId:44f823e07f6564322d4b75c366c2252179746733b04f417efe785bc22cc3f254,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727813204283638376,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 03cea441204a38481b2802a79896a7c8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ffde950-75ee-44f2-9272-6b986d5decb5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:08:02 pause-170137 crio[2234]: time="2024-10-01 20:08:02.567209602Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=41d22b01-6c0a-4c65-bae9-909691206799 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:08:02 pause-170137 crio[2234]: time="2024-10-01 20:08:02.567319400Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=41d22b01-6c0a-4c65-bae9-909691206799 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:08:02 pause-170137 crio[2234]: time="2024-10-01 20:08:02.568789805Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=52c62421-f7f4-4190-bb41-72d58e1aba2b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:08:02 pause-170137 crio[2234]: time="2024-10-01 20:08:02.569628707Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727813282569591243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=52c62421-f7f4-4190-bb41-72d58e1aba2b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:08:02 pause-170137 crio[2234]: time="2024-10-01 20:08:02.570379447Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5f02bde0-1152-4ff4-938e-cdafb107f2ea name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:08:02 pause-170137 crio[2234]: time="2024-10-01 20:08:02.570474134Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5f02bde0-1152-4ff4-938e-cdafb107f2ea name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:08:02 pause-170137 crio[2234]: time="2024-10-01 20:08:02.571816235Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:deedf3824bc9483ca76172a056cfd5164feda30f961e28daeabeaabe04e2c461,PodSandboxId:4c134d4b87189779ca1e5f046dfb42809bf2f2abbe419c6800900bd996f01821,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727813266001396140,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ffrj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9579b36d-adb4-4b12-a1de-b318cb62b8a3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0914589df96b483b8ebfe729b2aa7ec0e5c4ad21e54b98dadde189397f0853ea,PodSandboxId:5a5e1a400cb410ad546ba4f8e49985a0baa6f1790ccd87ab9aca831b36a6e3dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727813263199677736,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e51170df960e2d5f453b5738c1d025,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dabed2d08e52f4ba9d58733632c843f568a404800e60b3f0f832cd4aab283bc,PodSandboxId:9b55a4c23978acaf90c4390fe582d615987ec18383f8e584152c4bffae4d3151,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727813263210545432,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cd11daa5cdab6a4704ff3a11cdc428,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:956aef47f974a2576213aad53039396b3fd661c67b8377801c455d7c25af4d5a,PodSandboxId:120857355ab86b7130dac7f09e63a36f45ab94cff94b872058650ff83b68986c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727813253323500747,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e6a42ddfc1cb7a814eeb7af718d4bb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d7da33991924efa83fc62d469b68461bb89fd6437148b9dbdcb1e1960617df,PodSandboxId:6768cd3d490fce6181471abc753f441ac37c0df454687299db83648c397db6f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727813251412387497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03cea441204a38481b2802a79896a7c8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fbea9ec018f6657691a05a645ab4a14fa248e6538e0aaa6b33f4bcdbc5e8d8,PodSandboxId:78107dc435911602060535be331a2735409bc3b55d403aa8e0cc2533e80f9c38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727813250946779412,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8tqn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42e5352-5fa7-4a31-97a6-13e95b760487,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499a59a4cf03309bc4eb2c7fd056ae703281dbbf3731c6a575f38480721834ec,PodSandboxId:9b55a4c23978acaf90c4390fe582d615987ec18383f8e584152c4bffae4d3151,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727813250097770225,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cd11daa5cdab6a4704ff3a11cdc428,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0d4a72c290677d3b877779ce453b0e69fa0015617db7f645ad4688395ab996,PodSandboxId:5a5e1a400cb410ad546ba4f8e49985a0baa6f1790ccd87ab9aca831b36a6e3dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727813249976897819,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e51170df960e2d5f453b5738c1d025,},Annotations:map[string]string{io.kubernetes
.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b218656faf8165138d3e921d3db2d506d35669a650f07074504daff545fa656,PodSandboxId:4c134d4b87189779ca1e5f046dfb42809bf2f2abbe419c6800900bd996f01821,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727813249757550854,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ffrj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9579b36d-adb4-4b12-a1de-b318cb62b8a3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667fbb3e5706931d62ec8064b0e3050f2b38f94dcd31cdef183965f0e1b01717,PodSandboxId:5020e2cbb1fa206129471503a766d62db17a661c3f70d486c66713855ceb4d1e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727813216969711621,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8tqn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42e5352-5fa7-4a31-97a6-13e95b760487,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\
":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b60a5977fe573029539badfa21465c04853167aa7eec89399606bf02e954db7,PodSandboxId:60de18f0c7ded2535525e50ed8bbb3976400aecd3e1c5edef038ed7280581f84,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727813204351460001,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-170137,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e6a42ddfc1cb7a814eeb7af718d4bb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97bce934e0f09a84c734707360b83dc0263004bd1431ad3aae6e7a0b1174e051,PodSandboxId:44f823e07f6564322d4b75c366c2252179746733b04f417efe785bc22cc3f254,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727813204283638376,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 03cea441204a38481b2802a79896a7c8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5f02bde0-1152-4ff4-938e-cdafb107f2ea name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:08:02 pause-170137 crio[2234]: time="2024-10-01 20:08:02.623165997Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a2a760e8-30bb-4894-874f-9b75f5a6bce1 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:08:02 pause-170137 crio[2234]: time="2024-10-01 20:08:02.623300529Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a2a760e8-30bb-4894-874f-9b75f5a6bce1 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:08:02 pause-170137 crio[2234]: time="2024-10-01 20:08:02.624753057Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ca9713ba-d171-4926-b5ec-f84fdd5e34cd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:08:02 pause-170137 crio[2234]: time="2024-10-01 20:08:02.625497957Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727813282625458507,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca9713ba-d171-4926-b5ec-f84fdd5e34cd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:08:02 pause-170137 crio[2234]: time="2024-10-01 20:08:02.626277612Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7367f1cb-a7a1-4577-83ed-7f4bd5e8a7e5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:08:02 pause-170137 crio[2234]: time="2024-10-01 20:08:02.626350958Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7367f1cb-a7a1-4577-83ed-7f4bd5e8a7e5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:08:02 pause-170137 crio[2234]: time="2024-10-01 20:08:02.626624125Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:deedf3824bc9483ca76172a056cfd5164feda30f961e28daeabeaabe04e2c461,PodSandboxId:4c134d4b87189779ca1e5f046dfb42809bf2f2abbe419c6800900bd996f01821,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727813266001396140,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ffrj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9579b36d-adb4-4b12-a1de-b318cb62b8a3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0914589df96b483b8ebfe729b2aa7ec0e5c4ad21e54b98dadde189397f0853ea,PodSandboxId:5a5e1a400cb410ad546ba4f8e49985a0baa6f1790ccd87ab9aca831b36a6e3dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727813263199677736,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e51170df960e2d5f453b5738c1d025,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dabed2d08e52f4ba9d58733632c843f568a404800e60b3f0f832cd4aab283bc,PodSandboxId:9b55a4c23978acaf90c4390fe582d615987ec18383f8e584152c4bffae4d3151,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727813263210545432,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cd11daa5cdab6a4704ff3a11cdc428,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:956aef47f974a2576213aad53039396b3fd661c67b8377801c455d7c25af4d5a,PodSandboxId:120857355ab86b7130dac7f09e63a36f45ab94cff94b872058650ff83b68986c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727813253323500747,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e6a42ddfc1cb7a814eeb7af718d4bb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d7da33991924efa83fc62d469b68461bb89fd6437148b9dbdcb1e1960617df,PodSandboxId:6768cd3d490fce6181471abc753f441ac37c0df454687299db83648c397db6f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727813251412387497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03cea441204a38481b2802a79896a7c8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fbea9ec018f6657691a05a645ab4a14fa248e6538e0aaa6b33f4bcdbc5e8d8,PodSandboxId:78107dc435911602060535be331a2735409bc3b55d403aa8e0cc2533e80f9c38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727813250946779412,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8tqn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42e5352-5fa7-4a31-97a6-13e95b760487,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499a59a4cf03309bc4eb2c7fd056ae703281dbbf3731c6a575f38480721834ec,PodSandboxId:9b55a4c23978acaf90c4390fe582d615987ec18383f8e584152c4bffae4d3151,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727813250097770225,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cd11daa5cdab6a4704ff3a11cdc428,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0d4a72c290677d3b877779ce453b0e69fa0015617db7f645ad4688395ab996,PodSandboxId:5a5e1a400cb410ad546ba4f8e49985a0baa6f1790ccd87ab9aca831b36a6e3dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727813249976897819,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29e51170df960e2d5f453b5738c1d025,},Annotations:map[string]string{io.kubernetes
.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b218656faf8165138d3e921d3db2d506d35669a650f07074504daff545fa656,PodSandboxId:4c134d4b87189779ca1e5f046dfb42809bf2f2abbe419c6800900bd996f01821,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727813249757550854,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ffrj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9579b36d-adb4-4b12-a1de-b318cb62b8a3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667fbb3e5706931d62ec8064b0e3050f2b38f94dcd31cdef183965f0e1b01717,PodSandboxId:5020e2cbb1fa206129471503a766d62db17a661c3f70d486c66713855ceb4d1e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727813216969711621,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8tqn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42e5352-5fa7-4a31-97a6-13e95b760487,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\
":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b60a5977fe573029539badfa21465c04853167aa7eec89399606bf02e954db7,PodSandboxId:60de18f0c7ded2535525e50ed8bbb3976400aecd3e1c5edef038ed7280581f84,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727813204351460001,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-170137,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e6a42ddfc1cb7a814eeb7af718d4bb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97bce934e0f09a84c734707360b83dc0263004bd1431ad3aae6e7a0b1174e051,PodSandboxId:44f823e07f6564322d4b75c366c2252179746733b04f417efe785bc22cc3f254,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727813204283638376,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-170137,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 03cea441204a38481b2802a79896a7c8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7367f1cb-a7a1-4577-83ed-7f4bd5e8a7e5 name=/runtime.v1.RuntimeService/ListContainers
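	The repeated Version, ImageFsInfo and ListContainers entries above are CRI-O answering a CRI client (the kubelet, or crictl run by the log collector) polling it over unix:///var/run/crio/crio.sock. Below is a minimal Go sketch, not part of minikube or this test run, of issuing the same ListContainers call against that socket; it assumes the k8s.io/cri-api v1 client and google.golang.org/grpc are available.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Dial the CRI-O socket over gRPC; a local unix socket needs no TLS.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// An empty filter is what produces "No filters were applied,
	// returning full container list" in the crio debug log above.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Truncate the ID to 13 characters, as in the status table below.
		fmt.Printf("%s  %-25s  attempt=%d  %s\n",
			c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}

	On the node itself, sudo crictl ps -a exercises the same ListContainers API.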
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	deedf3824bc94       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   16 seconds ago       Running             kube-proxy                2                   4c134d4b87189       kube-proxy-ffrj7
	4dabed2d08e52       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   19 seconds ago       Running             kube-apiserver            2                   9b55a4c23978a       kube-apiserver-pause-170137
	0914589df96b4       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   19 seconds ago       Running             kube-controller-manager   2                   5a5e1a400cb41       kube-controller-manager-pause-170137
	956aef47f974a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   29 seconds ago       Running             kube-scheduler            1                   120857355ab86       kube-scheduler-pause-170137
	60d7da3399192       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   31 seconds ago       Running             etcd                      1                   6768cd3d490fc       etcd-pause-170137
	54fbea9ec018f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   31 seconds ago       Running             coredns                   1                   78107dc435911       coredns-7c65d6cfc9-8tqn8
	499a59a4cf033       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   32 seconds ago       Exited              kube-apiserver            1                   9b55a4c23978a       kube-apiserver-pause-170137
	2e0d4a72c2906       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   32 seconds ago       Exited              kube-controller-manager   1                   5a5e1a400cb41       kube-controller-manager-pause-170137
	1b218656faf81       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   32 seconds ago       Exited              kube-proxy                1                   4c134d4b87189       kube-proxy-ffrj7
	667fbb3e57069       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   5020e2cbb1fa2       coredns-7c65d6cfc9-8tqn8
	6b60a5977fe57       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   About a minute ago   Exited              kube-scheduler            0                   60de18f0c7ded       kube-scheduler-pause-170137
	97bce934e0f09       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Exited              etcd                      0                   44f823e07f656       etcd-pause-170137
	
	
	==> coredns [54fbea9ec018f6657691a05a645ab4a14fa248e6538e0aaa6b33f4bcdbc5e8d8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:34318 - 52756 "HINFO IN 6913580291352706447.4293852202948120045. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015419442s
	
	
	==> coredns [667fbb3e5706931d62ec8064b0e3050f2b38f94dcd31cdef183965f0e1b01717] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46416 - 14803 "HINFO IN 1179570859722145490.3756129640611081246. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013723578s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
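	The "plugin/ready: Still waiting on: \"kubernetes\"" and "plugin/health: Going into lameduck mode" lines come from CoreDNS's ready and health plugins. Below is a minimal Go sketch, not from this test run, of probing those endpoints; the pod IP is a placeholder and the ports are the CoreDNS defaults (8181 for /ready, 8080 for /health), both assumptions here.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe fetches a CoreDNS HTTP endpoint and prints the status line.
func probe(url string) {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println(url, "error:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(url, "->", resp.Status)
}

func main() {
	// Hypothetical pod IP; the real one comes from
	// kubectl -n kube-system get pods -o wide.
	podIP := "10.244.0.2"
	probe(fmt.Sprintf("http://%s:8181/ready", podIP))
	probe(fmt.Sprintf("http://%s:8080/health", podIP))
}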
	
	
	==> describe nodes <==
	Name:               pause-170137
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-170137
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=pause-170137
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T20_06_50_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 20:06:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-170137
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 20:07:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 20:07:45 +0000   Tue, 01 Oct 2024 20:06:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 20:07:45 +0000   Tue, 01 Oct 2024 20:06:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 20:07:45 +0000   Tue, 01 Oct 2024 20:06:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 20:07:45 +0000   Tue, 01 Oct 2024 20:06:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.12
	  Hostname:    pause-170137
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 e558b5aa64f3466ea34f8de6b68a4a28
	  System UUID:                e558b5aa-64f3-466e-a34f-8de6b68a4a28
	  Boot ID:                    6b66f759-787f-4745-a48e-ce3b5f47c632
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-8tqn8                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     68s
	  kube-system                 etcd-pause-170137                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         72s
	  kube-system                 kube-apiserver-pause-170137             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-controller-manager-pause-170137    200m (10%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-proxy-ffrj7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-scheduler-pause-170137             100m (5%)     0 (0%)      0 (0%)           0 (0%)         72s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 67s                kube-proxy       
	  Normal  Starting                 16s                kube-proxy       
	  Normal  Starting                 26s                kube-proxy       
	  Normal  Starting                 73s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  73s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  72s                kubelet          Node pause-170137 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     72s                kubelet          Node pause-170137 status is now: NodeHasSufficientPID
	  Normal  NodeReady                72s                kubelet          Node pause-170137 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    72s                kubelet          Node pause-170137 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           69s                node-controller  Node pause-170137 event: Registered Node pause-170137 in Controller
	  Normal  Starting                 20s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x8 over 20s)  kubelet          Node pause-170137 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 20s)  kubelet          Node pause-170137 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 20s)  kubelet          Node pause-170137 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14s                node-controller  Node pause-170137 event: Registered Node pause-170137 in Controller
	
	
	==> dmesg <==
	[ +10.278779] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.063293] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056456] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.173824] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.151371] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.301044] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +4.228110] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +4.325705] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.079350] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.493282] systemd-fstab-generator[1216]: Ignoring "noauto" option for root device
	[  +0.100548] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.815789] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	[  +0.675524] kauditd_printk_skb: 43 callbacks suppressed
	[Oct 1 20:07] systemd-fstab-generator[1995]: Ignoring "noauto" option for root device
	[  +0.100341] kauditd_printk_skb: 49 callbacks suppressed
	[  +0.066940] systemd-fstab-generator[2007]: Ignoring "noauto" option for root device
	[  +0.202415] systemd-fstab-generator[2020]: Ignoring "noauto" option for root device
	[  +0.206490] systemd-fstab-generator[2033]: Ignoring "noauto" option for root device
	[  +0.365726] systemd-fstab-generator[2061]: Ignoring "noauto" option for root device
	[  +1.200534] systemd-fstab-generator[2374]: Ignoring "noauto" option for root device
	[  +4.034647] kauditd_printk_skb: 210 callbacks suppressed
	[  +9.187102] systemd-fstab-generator[3170]: Ignoring "noauto" option for root device
	[  +0.089973] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.606945] kauditd_printk_skb: 40 callbacks suppressed
	[  +7.625093] systemd-fstab-generator[3507]: Ignoring "noauto" option for root device
	
	
	==> etcd [60d7da33991924efa83fc62d469b68461bb89fd6437148b9dbdcb1e1960617df] <==
	{"level":"info","ts":"2024-10-01T20:07:34.414499Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.12:2379"}
	{"level":"info","ts":"2024-10-01T20:07:34.414879Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T20:07:34.416140Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-01T20:07:34.416638Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-01T20:07:34.416688Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-01T20:07:55.332473Z","caller":"traceutil/trace.go:171","msg":"trace[208550121] linearizableReadLoop","detail":"{readStateIndex:494; appliedIndex:493; }","duration":"171.73869ms","start":"2024-10-01T20:07:55.160715Z","end":"2024-10-01T20:07:55.332454Z","steps":["trace[208550121] 'read index received'  (duration: 171.522596ms)","trace[208550121] 'applied index is now lower than readState.Index'  (duration: 215.255µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-01T20:07:55.332677Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.901799ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-170137\" ","response":"range_response_count:1 size:6990"}
	{"level":"info","ts":"2024-10-01T20:07:55.332773Z","caller":"traceutil/trace.go:171","msg":"trace[1002688057] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-170137; range_end:; response_count:1; response_revision:459; }","duration":"172.048635ms","start":"2024-10-01T20:07:55.160711Z","end":"2024-10-01T20:07:55.332760Z","steps":["trace[1002688057] 'agreement among raft nodes before linearized reading'  (duration: 171.834832ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T20:07:55.332965Z","caller":"traceutil/trace.go:171","msg":"trace[1612129183] transaction","detail":"{read_only:false; response_revision:459; number_of_response:1; }","duration":"181.03083ms","start":"2024-10-01T20:07:55.151924Z","end":"2024-10-01T20:07:55.332955Z","steps":["trace[1612129183] 'process raft request'  (duration: 180.404731ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T20:07:56.252332Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"236.533153ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T20:07:56.252469Z","caller":"traceutil/trace.go:171","msg":"trace[172454282] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:459; }","duration":"236.685685ms","start":"2024-10-01T20:07:56.015761Z","end":"2024-10-01T20:07:56.252447Z","steps":["trace[172454282] 'range keys from in-memory index tree'  (duration: 236.521045ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T20:07:56.252908Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"330.50603ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T20:07:56.252970Z","caller":"traceutil/trace.go:171","msg":"trace[654663731] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:459; }","duration":"330.576683ms","start":"2024-10-01T20:07:55.922386Z","end":"2024-10-01T20:07:56.252962Z","steps":["trace[654663731] 'range keys from in-memory index tree'  (duration: 330.497346ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T20:07:56.254028Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"475.55626ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5484982241399669423 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-170137\" mod_revision:459 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-170137\" value_size:6721 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-170137\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-01T20:07:56.254263Z","caller":"traceutil/trace.go:171","msg":"trace[1232195808] transaction","detail":"{read_only:false; response_revision:462; number_of_response:1; }","duration":"380.251731ms","start":"2024-10-01T20:07:55.874003Z","end":"2024-10-01T20:07:56.254255Z","steps":["trace[1232195808] 'process raft request'  (duration: 380.226044ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T20:07:56.254656Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T20:07:55.873981Z","time spent":"380.578364ms","remote":"127.0.0.1:57874","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-mtruts7gaawo33qz7nqe3y5c4a\" mod_revision:397 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-mtruts7gaawo33qz7nqe3y5c4a\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-mtruts7gaawo33qz7nqe3y5c4a\" > >"}
	{"level":"info","ts":"2024-10-01T20:07:56.256041Z","caller":"traceutil/trace.go:171","msg":"trace[2038229035] transaction","detail":"{read_only:false; response_revision:461; number_of_response:1; }","duration":"387.378927ms","start":"2024-10-01T20:07:55.868650Z","end":"2024-10-01T20:07:56.256029Z","steps":["trace[2038229035] 'process raft request'  (duration: 385.502576ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T20:07:56.256223Z","caller":"traceutil/trace.go:171","msg":"trace[49797945] linearizableReadLoop","detail":"{readStateIndex:495; appliedIndex:494; }","duration":"594.431371ms","start":"2024-10-01T20:07:55.660984Z","end":"2024-10-01T20:07:56.255415Z","steps":["trace[49797945] 'read index received'  (duration: 117.079071ms)","trace[49797945] 'applied index is now lower than readState.Index'  (duration: 477.350695ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-01T20:07:56.256327Z","caller":"traceutil/trace.go:171","msg":"trace[424267893] transaction","detail":"{read_only:false; response_revision:460; number_of_response:1; }","duration":"912.816707ms","start":"2024-10-01T20:07:55.343493Z","end":"2024-10-01T20:07:56.256309Z","steps":["trace[424267893] 'process raft request'  (duration: 434.512998ms)","trace[424267893] 'compare'  (duration: 474.598753ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-01T20:07:56.256393Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T20:07:55.343476Z","time spent":"912.880504ms","remote":"127.0.0.1:57790","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6783,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-170137\" mod_revision:459 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-170137\" value_size:6721 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-170137\" > >"}
	{"level":"warn","ts":"2024-10-01T20:07:56.256515Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"595.57418ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-170137\" ","response":"range_response_count:1 size:6798"}
	{"level":"info","ts":"2024-10-01T20:07:56.256551Z","caller":"traceutil/trace.go:171","msg":"trace[2100135488] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-170137; range_end:; response_count:1; response_revision:462; }","duration":"595.607514ms","start":"2024-10-01T20:07:55.660935Z","end":"2024-10-01T20:07:56.256543Z","steps":["trace[2100135488] 'agreement among raft nodes before linearized reading'  (duration: 595.555213ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T20:07:56.256575Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T20:07:55.660898Z","time spent":"595.671615ms","remote":"127.0.0.1:57790","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":6820,"request content":"key:\"/registry/pods/kube-system/kube-apiserver-pause-170137\" "}
	{"level":"warn","ts":"2024-10-01T20:07:56.256230Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T20:07:55.868626Z","time spent":"387.525773ms","remote":"127.0.0.1:57874","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":537,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-170137\" mod_revision:396 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-170137\" value_size:484 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-170137\" > >"}
	{"level":"info","ts":"2024-10-01T20:07:56.892617Z","caller":"traceutil/trace.go:171","msg":"trace[1622587799] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"178.292665ms","start":"2024-10-01T20:07:56.714284Z","end":"2024-10-01T20:07:56.892576Z","steps":["trace[1622587799] 'process raft request'  (duration: 125.873572ms)","trace[1622587799] 'compare'  (duration: 52.318845ms)"],"step_count":2}
	
	
	==> etcd [97bce934e0f09a84c734707360b83dc0263004bd1431ad3aae6e7a0b1174e051] <==
	{"level":"info","ts":"2024-10-01T20:06:45.204446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3a3129d9bdcf4c1e became leader at term 2"}
	{"level":"info","ts":"2024-10-01T20:06:45.204472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3a3129d9bdcf4c1e elected leader 3a3129d9bdcf4c1e at term 2"}
	{"level":"info","ts":"2024-10-01T20:06:45.215125Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T20:06:45.218330Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"3a3129d9bdcf4c1e","local-member-attributes":"{Name:pause-170137 ClientURLs:[https://192.168.50.12:2379]}","request-path":"/0/members/3a3129d9bdcf4c1e/attributes","cluster-id":"53bd91c8d6bcbd47","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-01T20:06:45.218583Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T20:06:45.218715Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T20:06:45.219051Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-01T20:06:45.240026Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-01T20:06:45.219724Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T20:06:45.219945Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"53bd91c8d6bcbd47","local-member-id":"3a3129d9bdcf4c1e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T20:06:45.241251Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T20:06:45.241321Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T20:06:45.236963Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T20:06:45.242162Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-01T20:06:45.245333Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.12:2379"}
	{"level":"info","ts":"2024-10-01T20:07:20.895529Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-01T20:07:20.895699Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-170137","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.12:2380"],"advertise-client-urls":["https://192.168.50.12:2379"]}
	{"level":"warn","ts":"2024-10-01T20:07:20.895926Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-01T20:07:20.896070Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-01T20:07:20.979937Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.12:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-01T20:07:20.980073Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.12:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-01T20:07:20.980390Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3a3129d9bdcf4c1e","current-leader-member-id":"3a3129d9bdcf4c1e"}
	{"level":"info","ts":"2024-10-01T20:07:20.983140Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.50.12:2380"}
	{"level":"info","ts":"2024-10-01T20:07:20.983260Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.50.12:2380"}
	{"level":"info","ts":"2024-10-01T20:07:20.983283Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-170137","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.12:2380"],"advertise-client-urls":["https://192.168.50.12:2379"]}
	
	
	==> kernel <==
	 20:08:03 up 1 min,  0 users,  load average: 1.05, 0.41, 0.15
	Linux pause-170137 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [499a59a4cf03309bc4eb2c7fd056ae703281dbbf3731c6a575f38480721834ec] <==
	I1001 20:07:36.116152       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1001 20:07:36.116294       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1001 20:07:36.116367       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I1001 20:07:36.116417       1 controller.go:84] Shutting down OpenAPI AggregationController
	I1001 20:07:36.116471       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I1001 20:07:36.120988       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1001 20:07:36.121061       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1001 20:07:36.121543       1 secure_serving.go:258] Stopped listening on [::]:8443
	I1001 20:07:36.121596       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1001 20:07:36.121736       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1001 20:07:36.124448       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I1001 20:07:36.131217       1 controller.go:157] Shutting down quota evaluator
	I1001 20:07:36.131739       1 controller.go:176] quota evaluator worker shutdown
	I1001 20:07:36.131901       1 controller.go:176] quota evaluator worker shutdown
	I1001 20:07:36.131980       1 controller.go:176] quota evaluator worker shutdown
	I1001 20:07:36.132052       1 controller.go:176] quota evaluator worker shutdown
	I1001 20:07:36.132081       1 controller.go:176] quota evaluator worker shutdown
	W1001 20:07:36.848195       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1001 20:07:36.848409       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1001 20:07:37.848128       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1001 20:07:37.848624       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	E1001 20:07:38.847739       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1001 20:07:38.848370       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1001 20:07:39.847586       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1001 20:07:39.847672       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	
	
	==> kube-apiserver [4dabed2d08e52f4ba9d58733632c843f568a404800e60b3f0f832cd4aab283bc] <==
	I1001 20:07:45.571800       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1001 20:07:45.571953       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1001 20:07:45.572164       1 shared_informer.go:320] Caches are synced for configmaps
	I1001 20:07:45.572353       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1001 20:07:45.573501       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1001 20:07:45.573551       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1001 20:07:45.577481       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1001 20:07:45.583120       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1001 20:07:45.584324       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1001 20:07:45.584382       1 aggregator.go:171] initial CRD sync complete...
	I1001 20:07:45.584407       1 autoregister_controller.go:144] Starting autoregister controller
	I1001 20:07:45.584429       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1001 20:07:45.584451       1 cache.go:39] Caches are synced for autoregister controller
	I1001 20:07:45.588494       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1001 20:07:45.588538       1 policy_source.go:224] refreshing policies
	I1001 20:07:45.597333       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1001 20:07:46.371142       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1001 20:07:46.688209       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.12]
	I1001 20:07:46.689648       1 controller.go:615] quota admission added evaluator for: endpoints
	I1001 20:07:46.697333       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1001 20:07:46.962510       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1001 20:07:46.979455       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1001 20:07:47.053597       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1001 20:07:47.097296       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1001 20:07:47.113661       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [0914589df96b483b8ebfe729b2aa7ec0e5c4ad21e54b98dadde189397f0853ea] <==
	I1001 20:07:48.828867       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1001 20:07:48.830032       1 shared_informer.go:320] Caches are synced for expand
	I1001 20:07:48.830097       1 shared_informer.go:320] Caches are synced for crt configmap
	I1001 20:07:48.832420       1 shared_informer.go:320] Caches are synced for job
	I1001 20:07:48.836942       1 shared_informer.go:320] Caches are synced for ephemeral
	I1001 20:07:48.838170       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1001 20:07:48.844617       1 shared_informer.go:320] Caches are synced for GC
	I1001 20:07:48.844714       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1001 20:07:48.848005       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1001 20:07:48.850450       1 shared_informer.go:320] Caches are synced for service account
	I1001 20:07:48.852790       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1001 20:07:48.856386       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1001 20:07:48.856583       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="78.515µs"
	I1001 20:07:48.859767       1 shared_informer.go:320] Caches are synced for taint
	I1001 20:07:48.859917       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1001 20:07:48.860016       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-170137"
	I1001 20:07:48.860129       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1001 20:07:48.938780       1 shared_informer.go:320] Caches are synced for resource quota
	I1001 20:07:48.980998       1 shared_informer.go:320] Caches are synced for disruption
	I1001 20:07:48.982899       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1001 20:07:49.006773       1 shared_informer.go:320] Caches are synced for resource quota
	I1001 20:07:49.030919       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1001 20:07:49.478023       1 shared_informer.go:320] Caches are synced for garbage collector
	I1001 20:07:49.479079       1 shared_informer.go:320] Caches are synced for garbage collector
	I1001 20:07:49.479124       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [2e0d4a72c290677d3b877779ce453b0e69fa0015617db7f645ad4688395ab996] <==
	I1001 20:07:30.870035       1 serving.go:386] Generated self-signed cert in-memory
	I1001 20:07:31.699272       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I1001 20:07:31.699310       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 20:07:31.700794       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1001 20:07:31.700886       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1001 20:07:31.700902       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1001 20:07:31.700913       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-proxy [1b218656faf8165138d3e921d3db2d506d35669a650f07074504daff545fa656] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 20:07:30.766130       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 20:07:35.997082       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.12"]
	E1001 20:07:35.998695       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 20:07:36.058717       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 20:07:36.059307       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 20:07:36.061698       1 server_linux.go:169] "Using iptables Proxier"
	I1001 20:07:36.069699       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 20:07:36.070049       1 server.go:483] "Version info" version="v1.31.1"
	I1001 20:07:36.070075       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 20:07:36.090974       1 config.go:199] "Starting service config controller"
	I1001 20:07:36.091081       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 20:07:36.091819       1 config.go:328] "Starting node config controller"
	I1001 20:07:36.091944       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 20:07:36.092031       1 config.go:105] "Starting endpoint slice config controller"
	I1001 20:07:36.092050       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 20:07:36.192335       1 shared_informer.go:320] Caches are synced for service config
	I1001 20:07:36.192528       1 shared_informer.go:320] Caches are synced for node config
	I1001 20:07:36.192617       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [deedf3824bc9483ca76172a056cfd5164feda30f961e28daeabeaabe04e2c461] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 20:07:46.188533       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 20:07:46.198165       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.12"]
	E1001 20:07:46.198257       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 20:07:46.246206       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 20:07:46.246281       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 20:07:46.246316       1 server_linux.go:169] "Using iptables Proxier"
	I1001 20:07:46.249594       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 20:07:46.249877       1 server.go:483] "Version info" version="v1.31.1"
	I1001 20:07:46.249903       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 20:07:46.252419       1 config.go:199] "Starting service config controller"
	I1001 20:07:46.252459       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 20:07:46.252486       1 config.go:105] "Starting endpoint slice config controller"
	I1001 20:07:46.252493       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 20:07:46.253015       1 config.go:328] "Starting node config controller"
	I1001 20:07:46.253038       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 20:07:46.353338       1 shared_informer.go:320] Caches are synced for node config
	I1001 20:07:46.353395       1 shared_informer.go:320] Caches are synced for service config
	I1001 20:07:46.353433       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6b60a5977fe573029539badfa21465c04853167aa7eec89399606bf02e954db7] <==
	E1001 20:06:47.950264       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1001 20:06:47.984058       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1001 20:06:47.984105       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 20:06:47.995695       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1001 20:06:47.995814       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 20:06:48.052976       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1001 20:06:48.053070       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1001 20:06:48.092032       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1001 20:06:48.092140       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1001 20:06:48.250454       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1001 20:06:48.250597       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:06:48.265582       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1001 20:06:48.266543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:06:48.309912       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1001 20:06:48.310048       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:06:48.336233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1001 20:06:48.336340       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:06:48.347534       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1001 20:06:48.347636       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:06:48.389258       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1001 20:06:48.389380       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1001 20:06:51.103541       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1001 20:07:20.894063       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1001 20:07:20.894232       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1001 20:07:20.894414       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [956aef47f974a2576213aad53039396b3fd661c67b8377801c455d7c25af4d5a] <==
	W1001 20:07:35.892074       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1001 20:07:35.892133       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1001 20:07:35.892165       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1001 20:07:35.977624       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1001 20:07:35.980881       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 20:07:35.984101       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1001 20:07:35.985047       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 20:07:35.987564       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1001 20:07:35.985084       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1001 20:07:36.087868       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1001 20:07:45.394165       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E1001 20:07:45.394247       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E1001 20:07:45.394284       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E1001 20:07:45.394332       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E1001 20:07:45.394382       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E1001 20:07:45.394436       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E1001 20:07:45.394499       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E1001 20:07:45.394558       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E1001 20:07:45.394620       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E1001 20:07:45.394651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E1001 20:07:45.394707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E1001 20:07:45.394772       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E1001 20:07:45.394855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E1001 20:07:45.394893       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E1001 20:07:45.489166       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	
	
	==> kubelet <==
	Oct 01 20:07:42 pause-170137 kubelet[3177]: I1001 20:07:42.946119    3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/39cd11daa5cdab6a4704ff3a11cdc428-k8s-certs\") pod \"kube-apiserver-pause-170137\" (UID: \"39cd11daa5cdab6a4704ff3a11cdc428\") " pod="kube-system/kube-apiserver-pause-170137"
	Oct 01 20:07:42 pause-170137 kubelet[3177]: I1001 20:07:42.946155    3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/39cd11daa5cdab6a4704ff3a11cdc428-usr-share-ca-certificates\") pod \"kube-apiserver-pause-170137\" (UID: \"39cd11daa5cdab6a4704ff3a11cdc428\") " pod="kube-system/kube-apiserver-pause-170137"
	Oct 01 20:07:42 pause-170137 kubelet[3177]: I1001 20:07:42.946195    3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29e51170df960e2d5f453b5738c1d025-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-170137\" (UID: \"29e51170df960e2d5f453b5738c1d025\") " pod="kube-system/kube-controller-manager-pause-170137"
	Oct 01 20:07:42 pause-170137 kubelet[3177]: I1001 20:07:42.946263    3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/03cea441204a38481b2802a79896a7c8-etcd-data\") pod \"etcd-pause-170137\" (UID: \"03cea441204a38481b2802a79896a7c8\") " pod="kube-system/etcd-pause-170137"
	Oct 01 20:07:42 pause-170137 kubelet[3177]: I1001 20:07:42.946295    3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29e51170df960e2d5f453b5738c1d025-k8s-certs\") pod \"kube-controller-manager-pause-170137\" (UID: \"29e51170df960e2d5f453b5738c1d025\") " pod="kube-system/kube-controller-manager-pause-170137"
	Oct 01 20:07:43 pause-170137 kubelet[3177]: I1001 20:07:43.105146    3177 kubelet_node_status.go:72] "Attempting to register node" node="pause-170137"
	Oct 01 20:07:43 pause-170137 kubelet[3177]: E1001 20:07:43.105930    3177 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.12:8443: connect: connection refused" node="pause-170137"
	Oct 01 20:07:43 pause-170137 kubelet[3177]: I1001 20:07:43.186684    3177 scope.go:117] "RemoveContainer" containerID="2e0d4a72c290677d3b877779ce453b0e69fa0015617db7f645ad4688395ab996"
	Oct 01 20:07:43 pause-170137 kubelet[3177]: I1001 20:07:43.186797    3177 scope.go:117] "RemoveContainer" containerID="499a59a4cf03309bc4eb2c7fd056ae703281dbbf3731c6a575f38480721834ec"
	Oct 01 20:07:43 pause-170137 kubelet[3177]: E1001 20:07:43.304445    3177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-170137?timeout=10s\": dial tcp 192.168.50.12:8443: connect: connection refused" interval="800ms"
	Oct 01 20:07:43 pause-170137 kubelet[3177]: I1001 20:07:43.507153    3177 kubelet_node_status.go:72] "Attempting to register node" node="pause-170137"
	Oct 01 20:07:45 pause-170137 kubelet[3177]: I1001 20:07:45.669174    3177 kubelet_node_status.go:111] "Node was previously registered" node="pause-170137"
	Oct 01 20:07:45 pause-170137 kubelet[3177]: I1001 20:07:45.669647    3177 kubelet_node_status.go:75] "Successfully registered node" node="pause-170137"
	Oct 01 20:07:45 pause-170137 kubelet[3177]: I1001 20:07:45.669912    3177 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 01 20:07:45 pause-170137 kubelet[3177]: I1001 20:07:45.671071    3177 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 01 20:07:45 pause-170137 kubelet[3177]: I1001 20:07:45.676063    3177 apiserver.go:52] "Watching apiserver"
	Oct 01 20:07:45 pause-170137 kubelet[3177]: I1001 20:07:45.708744    3177 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 01 20:07:45 pause-170137 kubelet[3177]: I1001 20:07:45.796611    3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9579b36d-adb4-4b12-a1de-b318cb62b8a3-lib-modules\") pod \"kube-proxy-ffrj7\" (UID: \"9579b36d-adb4-4b12-a1de-b318cb62b8a3\") " pod="kube-system/kube-proxy-ffrj7"
	Oct 01 20:07:45 pause-170137 kubelet[3177]: I1001 20:07:45.796685    3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9579b36d-adb4-4b12-a1de-b318cb62b8a3-xtables-lock\") pod \"kube-proxy-ffrj7\" (UID: \"9579b36d-adb4-4b12-a1de-b318cb62b8a3\") " pod="kube-system/kube-proxy-ffrj7"
	Oct 01 20:07:45 pause-170137 kubelet[3177]: E1001 20:07:45.881097    3177 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-170137\" already exists" pod="kube-system/kube-apiserver-pause-170137"
	Oct 01 20:07:45 pause-170137 kubelet[3177]: I1001 20:07:45.989615    3177 scope.go:117] "RemoveContainer" containerID="1b218656faf8165138d3e921d3db2d506d35669a650f07074504daff545fa656"
	Oct 01 20:07:52 pause-170137 kubelet[3177]: E1001 20:07:52.814638    3177 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727813272814256751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:07:52 pause-170137 kubelet[3177]: E1001 20:07:52.814662    3177 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727813272814256751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:08:02 pause-170137 kubelet[3177]: E1001 20:08:02.817105    3177 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727813282816698370,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:08:02 pause-170137 kubelet[3177]: E1001 20:08:02.817224    3177 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727813282816698370,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-170137 -n pause-170137
helpers_test.go:261: (dbg) Run:  kubectl --context pause-170137 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (59.51s)
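The repeated kubelet errors captured in the post-mortem above ("failed to get HasDedicatedImageFs ... missing image stats") appear to come from the kubelet's CRI ImageFsInfo query against CRI-O, whose logged response carries ImageFilesystems but an empty ContainerFilesystems list. For local triage only (these are standard minikube/crictl invocations, not harness output, and they assume the pause-170137 profile from the logs above is still running), a minimal sketch of pulling the same data by hand:

	# query CRI-O's image filesystem stats directly, as the kubelet does
	out/minikube-linux-amd64 ssh -p pause-170137 "sudo crictl imagefsinfo"
	# re-check the eviction manager messages in the guest's kubelet journal
	out/minikube-linux-amd64 ssh -p pause-170137 "sudo journalctl -u kubelet --no-pager | grep eviction_manager"
	# collect the full log bundle for the profile
	out/minikube-linux-amd64 logs -p pause-170137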

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (300.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-359369 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-359369 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (5m0.208806226s)

                                                
                                                
-- stdout --
	* [old-k8s-version-359369] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19736
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-359369" primary control-plane node in "old-k8s-version-359369" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 20:10:16.990743   61419 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:10:16.990885   61419 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:10:16.990896   61419 out.go:358] Setting ErrFile to fd 2...
	I1001 20:10:16.990902   61419 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:10:16.991081   61419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 20:10:16.991677   61419 out.go:352] Setting JSON to false
	I1001 20:10:16.992657   61419 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6759,"bootTime":1727806658,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 20:10:16.992777   61419 start.go:139] virtualization: kvm guest
	I1001 20:10:16.994741   61419 out.go:177] * [old-k8s-version-359369] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 20:10:16.995768   61419 notify.go:220] Checking for updates...
	I1001 20:10:16.995780   61419 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 20:10:16.996991   61419 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 20:10:16.998131   61419 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:10:16.999239   61419 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 20:10:17.000294   61419 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 20:10:17.001491   61419 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 20:10:17.002966   61419 config.go:182] Loaded profile config "cert-expiration-402897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:10:17.003057   61419 config.go:182] Loaded profile config "kubernetes-upgrade-869396": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1001 20:10:17.003134   61419 config.go:182] Loaded profile config "stopped-upgrade-042095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1001 20:10:17.003201   61419 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 20:10:17.041402   61419 out.go:177] * Using the kvm2 driver based on user configuration
	I1001 20:10:17.042466   61419 start.go:297] selected driver: kvm2
	I1001 20:10:17.042485   61419 start.go:901] validating driver "kvm2" against <nil>
	I1001 20:10:17.042497   61419 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 20:10:17.043203   61419 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:10:17.043270   61419 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-11198/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 20:10:17.059493   61419 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 20:10:17.059554   61419 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 20:10:17.059838   61419 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:10:17.059873   61419 cni.go:84] Creating CNI manager for ""
	I1001 20:10:17.059926   61419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:10:17.059933   61419 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 20:10:17.059984   61419 start.go:340] cluster config:
	{Name:old-k8s-version-359369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-359369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:10:17.060087   61419 iso.go:125] acquiring lock: {Name:mk06aa0d5182c9fbfa5e40313025cbec2b4400a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:10:17.061808   61419 out.go:177] * Starting "old-k8s-version-359369" primary control-plane node in "old-k8s-version-359369" cluster
	I1001 20:10:17.062950   61419 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1001 20:10:17.062997   61419 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1001 20:10:17.063005   61419 cache.go:56] Caching tarball of preloaded images
	I1001 20:10:17.063129   61419 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 20:10:17.063142   61419 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1001 20:10:17.063237   61419 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/config.json ...
	I1001 20:10:17.063256   61419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/config.json: {Name:mk1f2ddecaa3fea948ed81e63e9bca368b449e72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:10:17.063415   61419 start.go:360] acquireMachinesLock for old-k8s-version-359369: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 20:10:46.624764   61419 start.go:364] duration metric: took 29.56131886s to acquireMachinesLock for "old-k8s-version-359369"
	I1001 20:10:46.624864   61419 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-359369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-359369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 20:10:46.625039   61419 start.go:125] createHost starting for "" (driver="kvm2")
	I1001 20:10:46.627866   61419 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 20:10:46.628087   61419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:10:46.628143   61419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:10:46.644867   61419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39989
	I1001 20:10:46.645394   61419 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:10:46.646029   61419 main.go:141] libmachine: Using API Version  1
	I1001 20:10:46.646057   61419 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:10:46.646380   61419 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:10:46.646589   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetMachineName
	I1001 20:10:46.646745   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .DriverName
	I1001 20:10:46.646895   61419 start.go:159] libmachine.API.Create for "old-k8s-version-359369" (driver="kvm2")
	I1001 20:10:46.646929   61419 client.go:168] LocalClient.Create starting
	I1001 20:10:46.646967   61419 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem
	I1001 20:10:46.647017   61419 main.go:141] libmachine: Decoding PEM data...
	I1001 20:10:46.647045   61419 main.go:141] libmachine: Parsing certificate...
	I1001 20:10:46.647109   61419 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem
	I1001 20:10:46.647133   61419 main.go:141] libmachine: Decoding PEM data...
	I1001 20:10:46.647152   61419 main.go:141] libmachine: Parsing certificate...
	I1001 20:10:46.647177   61419 main.go:141] libmachine: Running pre-create checks...
	I1001 20:10:46.647188   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .PreCreateCheck
	I1001 20:10:46.647646   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetConfigRaw
	I1001 20:10:46.648074   61419 main.go:141] libmachine: Creating machine...
	I1001 20:10:46.648090   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .Create
	I1001 20:10:46.648249   61419 main.go:141] libmachine: (old-k8s-version-359369) Creating KVM machine...
	I1001 20:10:46.649552   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | found existing default KVM network
	I1001 20:10:46.651002   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:10:46.650800   61760 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ef:4b:e9} reservation:<nil>}
	I1001 20:10:46.652097   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:10:46.651987   61760 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:ec:fe:9e} reservation:<nil>}
	I1001 20:10:46.653331   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:10:46.653199   61760 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:16:fc:34} reservation:<nil>}
	I1001 20:10:46.654455   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:10:46.654281   61760 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003891b0}
	I1001 20:10:46.654490   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | created network xml: 
	I1001 20:10:46.654504   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | <network>
	I1001 20:10:46.654518   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG |   <name>mk-old-k8s-version-359369</name>
	I1001 20:10:46.654532   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG |   <dns enable='no'/>
	I1001 20:10:46.654541   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG |   
	I1001 20:10:46.654548   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1001 20:10:46.654558   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG |     <dhcp>
	I1001 20:10:46.654568   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1001 20:10:46.654583   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG |     </dhcp>
	I1001 20:10:46.654596   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG |   </ip>
	I1001 20:10:46.654610   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG |   
	I1001 20:10:46.654633   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | </network>
	I1001 20:10:46.654643   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | 
	I1001 20:10:46.660765   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | trying to create private KVM network mk-old-k8s-version-359369 192.168.72.0/24...
	I1001 20:10:46.737614   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | private KVM network mk-old-k8s-version-359369 192.168.72.0/24 created
	I1001 20:10:46.737657   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:10:46.737604   61760 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 20:10:46.737687   61419 main.go:141] libmachine: (old-k8s-version-359369) Setting up store path in /home/jenkins/minikube-integration/19736-11198/.minikube/machines/old-k8s-version-359369 ...
	I1001 20:10:46.737701   61419 main.go:141] libmachine: (old-k8s-version-359369) Building disk image from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 20:10:46.737721   61419 main.go:141] libmachine: (old-k8s-version-359369) Downloading /home/jenkins/minikube-integration/19736-11198/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 20:10:46.976200   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:10:46.976038   61760 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/old-k8s-version-359369/id_rsa...
	I1001 20:10:47.291562   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:10:47.291358   61760 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/old-k8s-version-359369/old-k8s-version-359369.rawdisk...
	I1001 20:10:47.291601   61419 main.go:141] libmachine: (old-k8s-version-359369) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/old-k8s-version-359369 (perms=drwx------)
	I1001 20:10:47.291612   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | Writing magic tar header
	I1001 20:10:47.291625   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | Writing SSH key tar header
	I1001 20:10:47.291637   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:10:47.291472   61760 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/old-k8s-version-359369 ...
	I1001 20:10:47.291658   61419 main.go:141] libmachine: (old-k8s-version-359369) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines (perms=drwxr-xr-x)
	I1001 20:10:47.291675   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/old-k8s-version-359369
	I1001 20:10:47.291685   61419 main.go:141] libmachine: (old-k8s-version-359369) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube (perms=drwxr-xr-x)
	I1001 20:10:47.291700   61419 main.go:141] libmachine: (old-k8s-version-359369) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198 (perms=drwxrwxr-x)
	I1001 20:10:47.291712   61419 main.go:141] libmachine: (old-k8s-version-359369) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 20:10:47.291721   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines
	I1001 20:10:47.291753   61419 main.go:141] libmachine: (old-k8s-version-359369) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 20:10:47.291789   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 20:10:47.291802   61419 main.go:141] libmachine: (old-k8s-version-359369) Creating domain...
	I1001 20:10:47.291819   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198
	I1001 20:10:47.291830   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 20:10:47.291844   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | Checking permissions on dir: /home/jenkins
	I1001 20:10:47.291854   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | Checking permissions on dir: /home
	I1001 20:10:47.291865   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | Skipping /home - not owner
	I1001 20:10:47.292828   61419 main.go:141] libmachine: (old-k8s-version-359369) define libvirt domain using xml: 
	I1001 20:10:47.292869   61419 main.go:141] libmachine: (old-k8s-version-359369) <domain type='kvm'>
	I1001 20:10:47.292881   61419 main.go:141] libmachine: (old-k8s-version-359369)   <name>old-k8s-version-359369</name>
	I1001 20:10:47.292892   61419 main.go:141] libmachine: (old-k8s-version-359369)   <memory unit='MiB'>2200</memory>
	I1001 20:10:47.292903   61419 main.go:141] libmachine: (old-k8s-version-359369)   <vcpu>2</vcpu>
	I1001 20:10:47.292913   61419 main.go:141] libmachine: (old-k8s-version-359369)   <features>
	I1001 20:10:47.292925   61419 main.go:141] libmachine: (old-k8s-version-359369)     <acpi/>
	I1001 20:10:47.292934   61419 main.go:141] libmachine: (old-k8s-version-359369)     <apic/>
	I1001 20:10:47.292951   61419 main.go:141] libmachine: (old-k8s-version-359369)     <pae/>
	I1001 20:10:47.292963   61419 main.go:141] libmachine: (old-k8s-version-359369)     
	I1001 20:10:47.292969   61419 main.go:141] libmachine: (old-k8s-version-359369)   </features>
	I1001 20:10:47.292974   61419 main.go:141] libmachine: (old-k8s-version-359369)   <cpu mode='host-passthrough'>
	I1001 20:10:47.292981   61419 main.go:141] libmachine: (old-k8s-version-359369)   
	I1001 20:10:47.292994   61419 main.go:141] libmachine: (old-k8s-version-359369)   </cpu>
	I1001 20:10:47.293005   61419 main.go:141] libmachine: (old-k8s-version-359369)   <os>
	I1001 20:10:47.293016   61419 main.go:141] libmachine: (old-k8s-version-359369)     <type>hvm</type>
	I1001 20:10:47.293028   61419 main.go:141] libmachine: (old-k8s-version-359369)     <boot dev='cdrom'/>
	I1001 20:10:47.293036   61419 main.go:141] libmachine: (old-k8s-version-359369)     <boot dev='hd'/>
	I1001 20:10:47.293044   61419 main.go:141] libmachine: (old-k8s-version-359369)     <bootmenu enable='no'/>
	I1001 20:10:47.293048   61419 main.go:141] libmachine: (old-k8s-version-359369)   </os>
	I1001 20:10:47.293055   61419 main.go:141] libmachine: (old-k8s-version-359369)   <devices>
	I1001 20:10:47.293060   61419 main.go:141] libmachine: (old-k8s-version-359369)     <disk type='file' device='cdrom'>
	I1001 20:10:47.293082   61419 main.go:141] libmachine: (old-k8s-version-359369)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/old-k8s-version-359369/boot2docker.iso'/>
	I1001 20:10:47.293095   61419 main.go:141] libmachine: (old-k8s-version-359369)       <target dev='hdc' bus='scsi'/>
	I1001 20:10:47.293111   61419 main.go:141] libmachine: (old-k8s-version-359369)       <readonly/>
	I1001 20:10:47.293126   61419 main.go:141] libmachine: (old-k8s-version-359369)     </disk>
	I1001 20:10:47.293139   61419 main.go:141] libmachine: (old-k8s-version-359369)     <disk type='file' device='disk'>
	I1001 20:10:47.293151   61419 main.go:141] libmachine: (old-k8s-version-359369)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 20:10:47.293167   61419 main.go:141] libmachine: (old-k8s-version-359369)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/old-k8s-version-359369/old-k8s-version-359369.rawdisk'/>
	I1001 20:10:47.293174   61419 main.go:141] libmachine: (old-k8s-version-359369)       <target dev='hda' bus='virtio'/>
	I1001 20:10:47.293180   61419 main.go:141] libmachine: (old-k8s-version-359369)     </disk>
	I1001 20:10:47.293186   61419 main.go:141] libmachine: (old-k8s-version-359369)     <interface type='network'>
	I1001 20:10:47.293192   61419 main.go:141] libmachine: (old-k8s-version-359369)       <source network='mk-old-k8s-version-359369'/>
	I1001 20:10:47.293202   61419 main.go:141] libmachine: (old-k8s-version-359369)       <model type='virtio'/>
	I1001 20:10:47.293230   61419 main.go:141] libmachine: (old-k8s-version-359369)     </interface>
	I1001 20:10:47.293253   61419 main.go:141] libmachine: (old-k8s-version-359369)     <interface type='network'>
	I1001 20:10:47.293267   61419 main.go:141] libmachine: (old-k8s-version-359369)       <source network='default'/>
	I1001 20:10:47.293277   61419 main.go:141] libmachine: (old-k8s-version-359369)       <model type='virtio'/>
	I1001 20:10:47.293294   61419 main.go:141] libmachine: (old-k8s-version-359369)     </interface>
	I1001 20:10:47.293318   61419 main.go:141] libmachine: (old-k8s-version-359369)     <serial type='pty'>
	I1001 20:10:47.293328   61419 main.go:141] libmachine: (old-k8s-version-359369)       <target port='0'/>
	I1001 20:10:47.293342   61419 main.go:141] libmachine: (old-k8s-version-359369)     </serial>
	I1001 20:10:47.293354   61419 main.go:141] libmachine: (old-k8s-version-359369)     <console type='pty'>
	I1001 20:10:47.293367   61419 main.go:141] libmachine: (old-k8s-version-359369)       <target type='serial' port='0'/>
	I1001 20:10:47.293376   61419 main.go:141] libmachine: (old-k8s-version-359369)     </console>
	I1001 20:10:47.293387   61419 main.go:141] libmachine: (old-k8s-version-359369)     <rng model='virtio'>
	I1001 20:10:47.293400   61419 main.go:141] libmachine: (old-k8s-version-359369)       <backend model='random'>/dev/random</backend>
	I1001 20:10:47.293410   61419 main.go:141] libmachine: (old-k8s-version-359369)     </rng>
	I1001 20:10:47.293417   61419 main.go:141] libmachine: (old-k8s-version-359369)     
	I1001 20:10:47.293424   61419 main.go:141] libmachine: (old-k8s-version-359369)     
	I1001 20:10:47.293434   61419 main.go:141] libmachine: (old-k8s-version-359369)   </devices>
	I1001 20:10:47.293441   61419 main.go:141] libmachine: (old-k8s-version-359369) </domain>
	I1001 20:10:47.293445   61419 main.go:141] libmachine: (old-k8s-version-359369) 
	I1001 20:10:47.300064   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:57:49:38 in network default
	I1001 20:10:47.300662   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:10:47.300675   61419 main.go:141] libmachine: (old-k8s-version-359369) Ensuring networks are active...
	I1001 20:10:47.301361   61419 main.go:141] libmachine: (old-k8s-version-359369) Ensuring network default is active
	I1001 20:10:47.301755   61419 main.go:141] libmachine: (old-k8s-version-359369) Ensuring network mk-old-k8s-version-359369 is active
	I1001 20:10:47.302302   61419 main.go:141] libmachine: (old-k8s-version-359369) Getting domain xml...
	I1001 20:10:47.303115   61419 main.go:141] libmachine: (old-k8s-version-359369) Creating domain...
	I1001 20:10:48.533116   61419 main.go:141] libmachine: (old-k8s-version-359369) Waiting to get IP...
	I1001 20:10:48.533915   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:10:48.534398   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | unable to find current IP address of domain old-k8s-version-359369 in network mk-old-k8s-version-359369
	I1001 20:10:48.534423   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:10:48.534353   61760 retry.go:31] will retry after 228.860941ms: waiting for machine to come up
	I1001 20:10:48.764933   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:10:48.765387   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | unable to find current IP address of domain old-k8s-version-359369 in network mk-old-k8s-version-359369
	I1001 20:10:48.765409   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:10:48.765339   61760 retry.go:31] will retry after 235.773786ms: waiting for machine to come up
	I1001 20:10:49.002684   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:10:49.003053   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | unable to find current IP address of domain old-k8s-version-359369 in network mk-old-k8s-version-359369
	I1001 20:10:49.003088   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:10:49.003020   61760 retry.go:31] will retry after 373.431722ms: waiting for machine to come up
	I1001 20:10:49.378613   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:10:49.379215   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | unable to find current IP address of domain old-k8s-version-359369 in network mk-old-k8s-version-359369
	I1001 20:10:49.379242   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:10:49.379160   61760 retry.go:31] will retry after 445.222097ms: waiting for machine to come up
	I1001 20:10:49.825541   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:10:49.825997   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | unable to find current IP address of domain old-k8s-version-359369 in network mk-old-k8s-version-359369
	I1001 20:10:49.826022   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:10:49.825950   61760 retry.go:31] will retry after 667.53135ms: waiting for machine to come up
	I1001 20:10:50.494942   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:10:50.495417   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | unable to find current IP address of domain old-k8s-version-359369 in network mk-old-k8s-version-359369
	I1001 20:10:50.495462   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:10:50.495368   61760 retry.go:31] will retry after 617.677622ms: waiting for machine to come up
	I1001 20:10:51.114363   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:10:51.114916   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | unable to find current IP address of domain old-k8s-version-359369 in network mk-old-k8s-version-359369
	I1001 20:10:51.114946   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:10:51.114871   61760 retry.go:31] will retry after 1.127717809s: waiting for machine to come up
	I1001 20:10:52.244839   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:10:52.245369   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | unable to find current IP address of domain old-k8s-version-359369 in network mk-old-k8s-version-359369
	I1001 20:10:52.245397   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:10:52.245352   61760 retry.go:31] will retry after 947.495581ms: waiting for machine to come up
	I1001 20:10:53.194443   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:10:53.195015   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | unable to find current IP address of domain old-k8s-version-359369 in network mk-old-k8s-version-359369
	I1001 20:10:53.195044   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:10:53.194948   61760 retry.go:31] will retry after 1.167890681s: waiting for machine to come up
	I1001 20:10:54.364455   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:10:54.365041   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | unable to find current IP address of domain old-k8s-version-359369 in network mk-old-k8s-version-359369
	I1001 20:10:54.365073   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:10:54.364982   61760 retry.go:31] will retry after 2.125944687s: waiting for machine to come up
	I1001 20:10:56.492825   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:10:56.493438   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | unable to find current IP address of domain old-k8s-version-359369 in network mk-old-k8s-version-359369
	I1001 20:10:56.493487   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:10:56.493367   61760 retry.go:31] will retry after 2.393322957s: waiting for machine to come up
	I1001 20:10:58.889454   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:10:58.890061   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | unable to find current IP address of domain old-k8s-version-359369 in network mk-old-k8s-version-359369
	I1001 20:10:58.890099   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:10:58.890001   61760 retry.go:31] will retry after 3.471737395s: waiting for machine to come up
	I1001 20:11:02.363665   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:02.364374   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | unable to find current IP address of domain old-k8s-version-359369 in network mk-old-k8s-version-359369
	I1001 20:11:02.364411   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:11:02.364298   61760 retry.go:31] will retry after 3.455284265s: waiting for machine to come up
	I1001 20:11:05.820984   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:05.821555   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | unable to find current IP address of domain old-k8s-version-359369 in network mk-old-k8s-version-359369
	I1001 20:11:05.821584   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:11:05.821503   61760 retry.go:31] will retry after 4.940959383s: waiting for machine to come up
	I1001 20:11:10.765808   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:10.766730   61419 main.go:141] libmachine: (old-k8s-version-359369) Found IP for machine: 192.168.72.110
	I1001 20:11:10.766777   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has current primary IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:10.766788   61419 main.go:141] libmachine: (old-k8s-version-359369) Reserving static IP address...
	I1001 20:11:10.767283   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-359369", mac: "52:54:00:b5:7f:54", ip: "192.168.72.110"} in network mk-old-k8s-version-359369
	I1001 20:11:10.867421   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | Getting to WaitForSSH function...
	I1001 20:11:10.867474   61419 main.go:141] libmachine: (old-k8s-version-359369) Reserved static IP address: 192.168.72.110
	I1001 20:11:10.867493   61419 main.go:141] libmachine: (old-k8s-version-359369) Waiting for SSH to be available...
	I1001 20:11:10.871047   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:10.871557   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:11:01 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b5:7f:54}
	I1001 20:11:10.871591   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:10.871723   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | Using SSH client type: external
	I1001 20:11:10.871757   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/old-k8s-version-359369/id_rsa (-rw-------)
	I1001 20:11:10.871825   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/old-k8s-version-359369/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 20:11:10.871855   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | About to run SSH command:
	I1001 20:11:10.871901   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | exit 0
	I1001 20:11:11.004775   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | SSH cmd err, output: <nil>: 
	I1001 20:11:11.005012   61419 main.go:141] libmachine: (old-k8s-version-359369) KVM machine creation complete!
	I1001 20:11:11.005334   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetConfigRaw
	I1001 20:11:11.005969   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .DriverName
	I1001 20:11:11.006198   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .DriverName
	I1001 20:11:11.006363   61419 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 20:11:11.006377   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetState
	I1001 20:11:11.007913   61419 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 20:11:11.007928   61419 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 20:11:11.007934   61419 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 20:11:11.007939   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHHostname
	I1001 20:11:11.010288   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:11.010995   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:11:01 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:11:11.011045   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:11.011182   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHPort
	I1001 20:11:11.011485   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:11:11.011664   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:11:11.011830   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHUsername
	I1001 20:11:11.012187   61419 main.go:141] libmachine: Using SSH client type: native
	I1001 20:11:11.012485   61419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.110 22 <nil> <nil>}
	I1001 20:11:11.012503   61419 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 20:11:11.127901   61419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 20:11:11.127935   61419 main.go:141] libmachine: Detecting the provisioner...
	I1001 20:11:11.127951   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHHostname
	I1001 20:11:11.132336   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:11.132841   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:11:01 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:11:11.132881   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:11.133192   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHPort
	I1001 20:11:11.133455   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:11:11.133661   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:11:11.133834   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHUsername
	I1001 20:11:11.134045   61419 main.go:141] libmachine: Using SSH client type: native
	I1001 20:11:11.134265   61419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.110 22 <nil> <nil>}
	I1001 20:11:11.134285   61419 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 20:11:11.253573   61419 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 20:11:11.253665   61419 main.go:141] libmachine: found compatible host: buildroot
	I1001 20:11:11.253678   61419 main.go:141] libmachine: Provisioning with buildroot...
	I1001 20:11:11.253696   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetMachineName
	I1001 20:11:11.253941   61419 buildroot.go:166] provisioning hostname "old-k8s-version-359369"
	I1001 20:11:11.253967   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetMachineName
	I1001 20:11:11.254158   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHHostname
	I1001 20:11:11.257355   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:11.257833   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:11:01 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:11:11.257862   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:11.258158   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHPort
	I1001 20:11:11.258349   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:11:11.258535   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:11:11.258739   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHUsername
	I1001 20:11:11.258937   61419 main.go:141] libmachine: Using SSH client type: native
	I1001 20:11:11.259177   61419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.110 22 <nil> <nil>}
	I1001 20:11:11.259190   61419 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-359369 && echo "old-k8s-version-359369" | sudo tee /etc/hostname
	I1001 20:11:11.396712   61419 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-359369
	
	I1001 20:11:11.396745   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHHostname
	I1001 20:11:11.399896   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:11.400384   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:11:01 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:11:11.400429   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:11.400695   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHPort
	I1001 20:11:11.400912   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:11:11.401130   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:11:11.401283   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHUsername
	I1001 20:11:11.401488   61419 main.go:141] libmachine: Using SSH client type: native
	I1001 20:11:11.401730   61419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.110 22 <nil> <nil>}
	I1001 20:11:11.401759   61419 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-359369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-359369/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-359369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 20:11:11.530646   61419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 20:11:11.530678   61419 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 20:11:11.530702   61419 buildroot.go:174] setting up certificates
	I1001 20:11:11.530717   61419 provision.go:84] configureAuth start
	I1001 20:11:11.530731   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetMachineName
	I1001 20:11:11.531044   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetIP
	I1001 20:11:11.534414   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:11.534851   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:11:01 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:11:11.534882   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:11.535078   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHHostname
	I1001 20:11:11.538017   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:11.538395   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:11:01 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:11:11.538423   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:11.538615   61419 provision.go:143] copyHostCerts
	I1001 20:11:11.538693   61419 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 20:11:11.538709   61419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 20:11:11.538788   61419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 20:11:11.538963   61419 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 20:11:11.538977   61419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 20:11:11.539018   61419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 20:11:11.539120   61419 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 20:11:11.539133   61419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 20:11:11.539163   61419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 20:11:11.539260   61419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-359369 san=[127.0.0.1 192.168.72.110 localhost minikube old-k8s-version-359369]
	I1001 20:11:11.624814   61419 provision.go:177] copyRemoteCerts
	I1001 20:11:11.624866   61419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 20:11:11.624889   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHHostname
	I1001 20:11:11.627391   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:11.627898   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:11:01 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:11:11.627928   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:11.628264   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHPort
	I1001 20:11:11.628488   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:11:11.628693   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHUsername
	I1001 20:11:11.628883   61419 sshutil.go:53] new ssh client: &{IP:192.168.72.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/old-k8s-version-359369/id_rsa Username:docker}
	I1001 20:11:11.714917   61419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 20:11:11.740757   61419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1001 20:11:11.766725   61419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 20:11:11.791365   61419 provision.go:87] duration metric: took 260.63398ms to configureAuth
	I1001 20:11:11.791406   61419 buildroot.go:189] setting minikube options for container-runtime
	I1001 20:11:11.791609   61419 config.go:182] Loaded profile config "old-k8s-version-359369": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1001 20:11:11.791698   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHHostname
	I1001 20:11:11.794673   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:11.795122   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:11:01 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:11:11.795153   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:11.795394   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHPort
	I1001 20:11:11.795601   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:11:11.795826   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:11:11.796032   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHUsername
	I1001 20:11:11.796245   61419 main.go:141] libmachine: Using SSH client type: native
	I1001 20:11:11.796493   61419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.110 22 <nil> <nil>}
	I1001 20:11:11.796519   61419 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 20:11:12.040573   61419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
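	For context (not part of the run output): the tee above drops those flags into /etc/sysconfig/crio.minikube, which the crio unit on the buildroot guest is expected to source as an environment file before the restart. A minimal manual check, assuming that wiring, would be:
	cat /etc/sysconfig/crio.minikube                  # expect CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl show crio --property=EnvironmentFiles   # confirm the sysconfig file is listed for the service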
	
	I1001 20:11:12.040597   61419 main.go:141] libmachine: Checking connection to Docker...
	I1001 20:11:12.040606   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetURL
	I1001 20:11:12.042155   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | Using libvirt version 6000000
	I1001 20:11:12.044931   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:12.045380   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:11:01 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:11:12.045407   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:12.045610   61419 main.go:141] libmachine: Docker is up and running!
	I1001 20:11:12.045622   61419 main.go:141] libmachine: Reticulating splines...
	I1001 20:11:12.045630   61419 client.go:171] duration metric: took 25.398690849s to LocalClient.Create
	I1001 20:11:12.045660   61419 start.go:167] duration metric: took 25.39876731s to libmachine.API.Create "old-k8s-version-359369"
	I1001 20:11:12.045671   61419 start.go:293] postStartSetup for "old-k8s-version-359369" (driver="kvm2")
	I1001 20:11:12.045682   61419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 20:11:12.045698   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .DriverName
	I1001 20:11:12.045955   61419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 20:11:12.045986   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHHostname
	I1001 20:11:12.049447   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:12.049859   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:11:01 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:11:12.049891   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:12.050152   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHPort
	I1001 20:11:12.050371   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:11:12.050502   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHUsername
	I1001 20:11:12.050687   61419 sshutil.go:53] new ssh client: &{IP:192.168.72.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/old-k8s-version-359369/id_rsa Username:docker}
	I1001 20:11:12.142476   61419 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 20:11:12.146775   61419 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 20:11:12.146807   61419 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 20:11:12.146879   61419 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 20:11:12.146967   61419 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 20:11:12.147095   61419 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 20:11:12.156732   61419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 20:11:12.181869   61419 start.go:296] duration metric: took 136.181719ms for postStartSetup
	I1001 20:11:12.181928   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetConfigRaw
	I1001 20:11:12.182522   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetIP
	I1001 20:11:12.185683   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:12.186255   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:11:01 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:11:12.186283   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:12.186598   61419 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/config.json ...
	I1001 20:11:12.186807   61419 start.go:128] duration metric: took 25.561755365s to createHost
	I1001 20:11:12.186831   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHHostname
	I1001 20:11:12.189783   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:12.190132   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:11:01 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:11:12.190168   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:12.190371   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHPort
	I1001 20:11:12.190641   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:11:12.190821   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:11:12.191124   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHUsername
	I1001 20:11:12.191434   61419 main.go:141] libmachine: Using SSH client type: native
	I1001 20:11:12.191712   61419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.110 22 <nil> <nil>}
	I1001 20:11:12.191732   61419 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 20:11:12.318493   61419 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727813472.274081343
	
	I1001 20:11:12.318513   61419 fix.go:216] guest clock: 1727813472.274081343
	I1001 20:11:12.318524   61419 fix.go:229] Guest: 2024-10-01 20:11:12.274081343 +0000 UTC Remote: 2024-10-01 20:11:12.186819237 +0000 UTC m=+55.234038702 (delta=87.262106ms)
	I1001 20:11:12.318572   61419 fix.go:200] guest clock delta is within tolerance: 87.262106ms
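	Spelling out the delta reported above, using only the two timestamps already in the log: the guest clock reads 1727813472.274081343, i.e. 20:11:12.274081343 UTC, while the host-side reference is 20:11:12.186819237 UTC, so the skew is 0.274081343 s - 0.186819237 s = 0.087262106 s, the 87.262106ms shown, hence the "within tolerance" result rather than a clock adjustment.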
	I1001 20:11:12.318579   61419 start.go:83] releasing machines lock for "old-k8s-version-359369", held for 25.693773032s
	I1001 20:11:12.318605   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .DriverName
	I1001 20:11:12.318895   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetIP
	I1001 20:11:12.323235   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:12.323731   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:11:01 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:11:12.323764   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:12.324009   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .DriverName
	I1001 20:11:12.324646   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .DriverName
	I1001 20:11:12.324805   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .DriverName
	I1001 20:11:12.324897   61419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 20:11:12.324945   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHHostname
	I1001 20:11:12.325031   61419 ssh_runner.go:195] Run: cat /version.json
	I1001 20:11:12.325061   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHHostname
	I1001 20:11:12.328119   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:12.328301   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:12.329022   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:11:01 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:11:12.329050   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:12.329062   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHPort
	I1001 20:11:12.329195   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:11:01 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:11:12.329286   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:12.329334   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHPort
	I1001 20:11:12.329518   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:11:12.329576   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:11:12.329690   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHUsername
	I1001 20:11:12.329779   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHUsername
	I1001 20:11:12.329852   61419 sshutil.go:53] new ssh client: &{IP:192.168.72.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/old-k8s-version-359369/id_rsa Username:docker}
	I1001 20:11:12.329905   61419 sshutil.go:53] new ssh client: &{IP:192.168.72.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/old-k8s-version-359369/id_rsa Username:docker}
	I1001 20:11:12.423159   61419 ssh_runner.go:195] Run: systemctl --version
	I1001 20:11:12.462070   61419 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 20:11:12.630039   61419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 20:11:12.636593   61419 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 20:11:12.636682   61419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 20:11:12.654027   61419 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 20:11:12.654060   61419 start.go:495] detecting cgroup driver to use...
	I1001 20:11:12.654135   61419 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 20:11:12.675323   61419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 20:11:12.691618   61419 docker.go:217] disabling cri-docker service (if available) ...
	I1001 20:11:12.691707   61419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 20:11:12.707855   61419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 20:11:12.725676   61419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 20:11:12.859748   61419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 20:11:13.003699   61419 docker.go:233] disabling docker service ...
	I1001 20:11:13.003788   61419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 20:11:13.021041   61419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 20:11:13.037258   61419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 20:11:13.187372   61419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 20:11:13.322764   61419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 20:11:13.340508   61419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 20:11:13.360422   61419 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1001 20:11:13.360513   61419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:11:13.372146   61419 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 20:11:13.372228   61419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:11:13.384528   61419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:11:13.396128   61419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
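	Pieced together from the sed edits above (an inferred sketch, not a dump of the guest file; the TOML table names are an assumption, only the three key/value lines follow from the commands), /etc/crio/crio.conf.d/02-crio.conf ends up roughly as:
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.2"
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"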
	I1001 20:11:13.407894   61419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 20:11:13.420195   61419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 20:11:13.431074   61419 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 20:11:13.431156   61419 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 20:11:13.453080   61419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 20:11:13.464116   61419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:11:13.604060   61419 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 20:11:13.725525   61419 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 20:11:13.725599   61419 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 20:11:13.730492   61419 start.go:563] Will wait 60s for crictl version
	I1001 20:11:13.730566   61419 ssh_runner.go:195] Run: which crictl
	I1001 20:11:13.734312   61419 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 20:11:13.781200   61419 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
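	The crictl output above works because /etc/crictl.yaml, written a few lines earlier, points crictl at the CRI-O socket; an equivalent manual invocation that bypasses the config file (a sketch, not taken from this run) would be:
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version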
	I1001 20:11:13.781311   61419 ssh_runner.go:195] Run: crio --version
	I1001 20:11:13.816823   61419 ssh_runner.go:195] Run: crio --version
	I1001 20:11:13.851164   61419 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1001 20:11:13.852503   61419 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetIP
	I1001 20:11:13.855910   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:13.856450   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:11:01 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:11:13.856482   61419 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:11:13.856789   61419 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1001 20:11:13.862191   61419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 20:11:13.878347   61419 kubeadm.go:883] updating cluster {Name:old-k8s-version-359369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-359369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.110 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 20:11:13.878511   61419 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1001 20:11:13.878597   61419 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:11:13.910692   61419 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1001 20:11:13.910790   61419 ssh_runner.go:195] Run: which lz4
	I1001 20:11:13.914815   61419 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 20:11:13.918996   61419 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 20:11:13.919030   61419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1001 20:11:15.517273   61419 crio.go:462] duration metric: took 1.602502875s to copy over tarball
	I1001 20:11:15.517355   61419 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 20:11:18.293040   61419 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.775627281s)
	I1001 20:11:18.293075   61419 crio.go:469] duration metric: took 2.775773019s to extract the tarball
	I1001 20:11:18.293086   61419 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1001 20:11:18.335343   61419 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:11:18.427755   61419 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1001 20:11:18.427778   61419 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1001 20:11:18.427869   61419 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1001 20:11:18.427901   61419 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1001 20:11:18.427900   61419 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1001 20:11:18.427861   61419 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 20:11:18.427931   61419 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1001 20:11:18.427945   61419 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1001 20:11:18.427885   61419 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1001 20:11:18.427967   61419 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1001 20:11:18.429583   61419 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1001 20:11:18.429596   61419 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1001 20:11:18.429611   61419 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1001 20:11:18.429592   61419 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 20:11:18.429624   61419 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1001 20:11:18.429641   61419 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1001 20:11:18.429667   61419 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1001 20:11:18.429762   61419 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1001 20:11:18.654149   61419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1001 20:11:18.699941   61419 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1001 20:11:18.699991   61419 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1001 20:11:18.700040   61419 ssh_runner.go:195] Run: which crictl
	I1001 20:11:18.704137   61419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1001 20:11:18.713828   61419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1001 20:11:18.723413   61419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1001 20:11:18.748046   61419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1001 20:11:18.755391   61419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1001 20:11:18.764547   61419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1001 20:11:18.770255   61419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1001 20:11:18.777606   61419 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1001 20:11:18.777648   61419 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1001 20:11:18.777709   61419 ssh_runner.go:195] Run: which crictl
	I1001 20:11:18.831039   61419 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1001 20:11:18.831090   61419 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1001 20:11:18.831139   61419 ssh_runner.go:195] Run: which crictl
	I1001 20:11:18.847477   61419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1001 20:11:18.908458   61419 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1001 20:11:18.908504   61419 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1001 20:11:18.908505   61419 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1001 20:11:18.908540   61419 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1001 20:11:18.908555   61419 ssh_runner.go:195] Run: which crictl
	I1001 20:11:18.908576   61419 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1001 20:11:18.908629   61419 ssh_runner.go:195] Run: which crictl
	I1001 20:11:18.908634   61419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1001 20:11:18.908548   61419 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1001 20:11:18.908683   61419 ssh_runner.go:195] Run: which crictl
	I1001 20:11:18.908693   61419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1001 20:11:18.908658   61419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1001 20:11:18.939775   61419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1001 20:11:18.939798   61419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1001 20:11:18.939989   61419 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1001 20:11:18.940029   61419 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1001 20:11:18.940069   61419 ssh_runner.go:195] Run: which crictl
	I1001 20:11:19.022160   61419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1001 20:11:19.022201   61419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1001 20:11:19.022241   61419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1001 20:11:19.022324   61419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1001 20:11:19.059059   61419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1001 20:11:19.065273   61419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1001 20:11:19.065291   61419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1001 20:11:19.143669   61419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1001 20:11:19.143753   61419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1001 20:11:19.179767   61419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1001 20:11:19.206155   61419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1001 20:11:19.206260   61419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1001 20:11:19.211810   61419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1001 20:11:19.297089   61419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1001 20:11:19.297097   61419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1001 20:11:19.297239   61419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1001 20:11:19.331092   61419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1001 20:11:19.346924   61419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1001 20:11:19.356911   61419 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1001 20:11:19.386362   61419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1001 20:11:19.410194   61419 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1001 20:11:19.776972   61419 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 20:11:19.917478   61419 cache_images.go:92] duration metric: took 1.48968065s to LoadCachedImages
	W1001 20:11:19.917611   61419 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
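	This warning is non-fatal: with neither a preload image nor a populated local image cache for v1.20.0, the required images are pulled during kubeadm init instead. One way to pre-seed the cache directory referenced in the error by hand (a sketch, not part of the test flow) would be:
	minikube cache add registry.k8s.io/pause:3.2
	minikube cache add registry.k8s.io/etcd:3.4.13-0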
	I1001 20:11:19.917645   61419 kubeadm.go:934] updating node { 192.168.72.110 8443 v1.20.0 crio true true} ...
	I1001 20:11:19.917782   61419 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-359369 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-359369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
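	Once this fragment is installed (the scp of 10-kubeadm.conf and kubelet.service follows a few lines below), the effective unit can be inspected on the guest with standard systemd tooling, e.g. (a sketch, not part of this run):
	systemctl cat kubelet    # base unit plus the 10-kubeadm.conf drop-in carrying the ExecStart override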
	I1001 20:11:19.917876   61419 ssh_runner.go:195] Run: crio config
	I1001 20:11:19.974720   61419 cni.go:84] Creating CNI manager for ""
	I1001 20:11:19.974751   61419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:11:19.974760   61419 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 20:11:19.974777   61419 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.110 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-359369 NodeName:old-k8s-version-359369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1001 20:11:19.974910   61419 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-359369"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.110
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.110"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
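	The manifest above is what later lands in /var/tmp/minikube/kubeadm.yaml and is handed to kubeadm init via --config (see the final command below). To vet it by hand first, one could run just the preflight phase against it, roughly as follows (a sketch; the phase subcommand and --config flag exist in this kubeadm version, but this invocation is not part of the minikube flow):
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml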
	
	I1001 20:11:19.974967   61419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1001 20:11:19.986583   61419 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 20:11:19.986664   61419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 20:11:19.997921   61419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1001 20:11:20.015758   61419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 20:11:20.033559   61419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1001 20:11:20.050982   61419 ssh_runner.go:195] Run: grep 192.168.72.110	control-plane.minikube.internal$ /etc/hosts
	I1001 20:11:20.054892   61419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.110	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 20:11:20.067616   61419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:11:20.187074   61419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:11:20.208231   61419 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369 for IP: 192.168.72.110
	I1001 20:11:20.208257   61419 certs.go:194] generating shared ca certs ...
	I1001 20:11:20.208303   61419 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:11:20.208514   61419 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 20:11:20.208569   61419 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 20:11:20.208582   61419 certs.go:256] generating profile certs ...
	I1001 20:11:20.208651   61419 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/client.key
	I1001 20:11:20.208675   61419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/client.crt with IP's: []
	I1001 20:11:20.349104   61419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/client.crt ...
	I1001 20:11:20.349132   61419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/client.crt: {Name:mkb34eb8d4f7a4fadbfa48810461e420dcef91dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:11:20.349289   61419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/client.key ...
	I1001 20:11:20.349302   61419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/client.key: {Name:mk4e49290031625ef9a721df80d74141a97eaba9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:11:20.349395   61419 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/apiserver.key.3f76c948
	I1001 20:11:20.349413   61419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/apiserver.crt.3f76c948 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.110]
	I1001 20:11:20.423863   61419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/apiserver.crt.3f76c948 ...
	I1001 20:11:20.423897   61419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/apiserver.crt.3f76c948: {Name:mk550672f66893817c3f14208c35529522b35f62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:11:20.424068   61419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/apiserver.key.3f76c948 ...
	I1001 20:11:20.424095   61419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/apiserver.key.3f76c948: {Name:mkd18df8c164755a9fb6b3e38fd7d51d2fa1f466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:11:20.424197   61419 certs.go:381] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/apiserver.crt.3f76c948 -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/apiserver.crt
	I1001 20:11:20.424313   61419 certs.go:385] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/apiserver.key.3f76c948 -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/apiserver.key
	I1001 20:11:20.424426   61419 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/proxy-client.key
	I1001 20:11:20.424460   61419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/proxy-client.crt with IP's: []
	I1001 20:11:20.649312   61419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/proxy-client.crt ...
	I1001 20:11:20.649342   61419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/proxy-client.crt: {Name:mkd81cc3ac42abf8798b4511f02af56fc8937d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:11:20.649519   61419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/proxy-client.key ...
	I1001 20:11:20.649538   61419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/proxy-client.key: {Name:mk433e69ce46320858e435971c228f2fd6a298cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:11:20.649743   61419 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem (1338 bytes)
	W1001 20:11:20.649782   61419 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430_empty.pem, impossibly tiny 0 bytes
	I1001 20:11:20.649792   61419 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 20:11:20.649819   61419 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 20:11:20.649850   61419 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 20:11:20.649883   61419 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 20:11:20.649934   61419 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem (1708 bytes)
	I1001 20:11:20.650569   61419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 20:11:20.676965   61419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 20:11:20.703579   61419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 20:11:20.732896   61419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 20:11:20.757858   61419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1001 20:11:20.783006   61419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 20:11:20.809049   61419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 20:11:20.899255   61419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1001 20:11:20.927537   61419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 20:11:20.955845   61419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem --> /usr/share/ca-certificates/18430.pem (1338 bytes)
	I1001 20:11:20.982846   61419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /usr/share/ca-certificates/184302.pem (1708 bytes)
	I1001 20:11:21.007667   61419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 20:11:21.028185   61419 ssh_runner.go:195] Run: openssl version
	I1001 20:11:21.040501   61419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 20:11:21.056271   61419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:11:21.062142   61419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:11:21.062222   61419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:11:21.072818   61419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 20:11:21.090272   61419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18430.pem && ln -fs /usr/share/ca-certificates/18430.pem /etc/ssl/certs/18430.pem"
	I1001 20:11:21.101893   61419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18430.pem
	I1001 20:11:21.106804   61419 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 20:11:21.106873   61419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18430.pem
	I1001 20:11:21.113036   61419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18430.pem /etc/ssl/certs/51391683.0"
	I1001 20:11:21.128181   61419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/184302.pem && ln -fs /usr/share/ca-certificates/184302.pem /etc/ssl/certs/184302.pem"
	I1001 20:11:21.143179   61419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/184302.pem
	I1001 20:11:21.149191   61419 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 20:11:21.149253   61419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/184302.pem
	I1001 20:11:21.155919   61419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/184302.pem /etc/ssl/certs/3ec20f2e.0"
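	For reference, the numeric link names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes of the respective certificates, so each link can be reproduced by hand, e.g. for the minikube CA (values taken from the log lines above):
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 for this CA
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"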
	I1001 20:11:21.167445   61419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 20:11:21.172156   61419 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 20:11:21.172221   61419 kubeadm.go:392] StartCluster: {Name:old-k8s-version-359369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-359369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.110 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:11:21.172326   61419 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 20:11:21.172400   61419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 20:11:21.213972   61419 cri.go:89] found id: ""
	I1001 20:11:21.214070   61419 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 20:11:21.224019   61419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 20:11:21.234237   61419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:11:21.243408   61419 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:11:21.243432   61419 kubeadm.go:157] found existing configuration files:
	
	I1001 20:11:21.243503   61419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 20:11:21.253205   61419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:11:21.253272   61419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:11:21.263635   61419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 20:11:21.272535   61419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:11:21.272621   61419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:11:21.282941   61419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 20:11:21.292299   61419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:11:21.292382   61419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:11:21.301589   61419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 20:11:21.310722   61419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:11:21.310796   61419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
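	The grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint. A minimal bash sketch of that loop, using the endpoint string and file names exactly as they appear in this log (grep -q is substituted for the plain grep used above):

	    # Sketch of the stale kubeconfig cleanup logged above (assumes root/sudo on the node).
	    endpoint="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
	        sudo rm -f "/etc/kubernetes/$f"   # drop configs that do not match (or do not exist)
	      fi
	    done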
	I1001 20:11:21.320554   61419 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 20:11:21.448598   61419 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1001 20:11:21.448691   61419 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:11:21.591361   61419 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:11:21.591498   61419 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:11:21.591644   61419 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1001 20:11:21.774828   61419 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 20:11:21.776859   61419 out.go:235]   - Generating certificates and keys ...
	I1001 20:11:21.776973   61419 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:11:21.777046   61419 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:11:21.851403   61419 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1001 20:11:22.030317   61419 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1001 20:11:22.151573   61419 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1001 20:11:22.390772   61419 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1001 20:11:22.644011   61419 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1001 20:11:22.644242   61419 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-359369] and IPs [192.168.72.110 127.0.0.1 ::1]
	I1001 20:11:22.859510   61419 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1001 20:11:22.859661   61419 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-359369] and IPs [192.168.72.110 127.0.0.1 ::1]
	I1001 20:11:22.978648   61419 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1001 20:11:23.124947   61419 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1001 20:11:23.371880   61419 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1001 20:11:23.372000   61419 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:11:23.544635   61419 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:11:23.908521   61419 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:11:23.988273   61419 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:11:24.190381   61419 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:11:24.209847   61419 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:11:24.210296   61419 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:11:24.210383   61419 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:11:24.342298   61419 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:11:24.344025   61419 out.go:235]   - Booting up control plane ...
	I1001 20:11:24.344179   61419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:11:24.352520   61419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:11:24.353689   61419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:11:24.354807   61419 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:11:24.375470   61419 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1001 20:12:04.338994   61419 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1001 20:12:04.339133   61419 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:12:04.339412   61419 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:12:09.338771   61419 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:12:09.339053   61419 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:12:19.337843   61419 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:12:19.338123   61419 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:12:39.337770   61419 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:12:39.338049   61419 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:13:19.337283   61419 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:13:19.337587   61419 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:13:19.337613   61419 kubeadm.go:310] 
	I1001 20:13:19.337668   61419 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1001 20:13:19.337906   61419 kubeadm.go:310] 		timed out waiting for the condition
	I1001 20:13:19.337923   61419 kubeadm.go:310] 
	I1001 20:13:19.337972   61419 kubeadm.go:310] 	This error is likely caused by:
	I1001 20:13:19.338013   61419 kubeadm.go:310] 		- The kubelet is not running
	I1001 20:13:19.338180   61419 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1001 20:13:19.338193   61419 kubeadm.go:310] 
	I1001 20:13:19.338337   61419 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1001 20:13:19.338390   61419 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1001 20:13:19.338434   61419 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1001 20:13:19.338441   61419 kubeadm.go:310] 
	I1001 20:13:19.338594   61419 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1001 20:13:19.338717   61419 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1001 20:13:19.338724   61419 kubeadm.go:310] 
	I1001 20:13:19.338860   61419 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1001 20:13:19.339009   61419 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1001 20:13:19.339118   61419 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1001 20:13:19.339218   61419 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1001 20:13:19.339225   61419 kubeadm.go:310] 
	I1001 20:13:19.341096   61419 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 20:13:19.341221   61419 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1001 20:13:19.341318   61419 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
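	kubeadm's advice above reduces to a short diagnostic pass on the node. A sketch built only from the commands quoted in this log (kubelet unit status, its journal, the port-10248 healthz probe, and the CRI-O container listing); the --no-pager/tail trimming is an added convenience, not from the log:

	    # Kubelet-check troubleshooting steps, as suggested in the kubeadm output above.
	    sudo systemctl status kubelet --no-pager                  # is the kubelet unit running?
	    sudo journalctl -xeu kubelet --no-pager | tail -n 100     # recent kubelet errors
	    curl -sSL http://localhost:10248/healthz; echo            # the endpoint kubeadm polls
	    # Any control-plane containers CRI-O managed to create:
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause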
	W1001 20:13:19.341453   61419 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-359369] and IPs [192.168.72.110 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-359369] and IPs [192.168.72.110 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-359369] and IPs [192.168.72.110 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-359369] and IPs [192.168.72.110 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1001 20:13:19.341502   61419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1001 20:13:19.878283   61419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
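	Before retrying, minikube resets the half-initialized control plane and confirms the kubelet is no longer active. A sketch of that step with the binary path and CRI socket taken from the commands above (the is-active check is simplified to query the kubelet unit directly):

	    # Reset the failed kubeadm init before the second attempt (paths as logged above).
	    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	    sudo systemctl is-active --quiet kubelet || echo "kubelet is not active"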
	I1001 20:13:19.897398   61419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:13:19.912630   61419 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:13:19.912653   61419 kubeadm.go:157] found existing configuration files:
	
	I1001 20:13:19.912723   61419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 20:13:19.925730   61419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:13:19.925815   61419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:13:19.939784   61419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 20:13:19.952074   61419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:13:19.952148   61419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:13:19.965228   61419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 20:13:19.977176   61419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:13:19.977252   61419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:13:19.989801   61419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 20:13:20.001937   61419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:13:20.002010   61419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 20:13:20.012387   61419 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 20:13:20.090245   61419 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1001 20:13:20.090339   61419 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:13:20.285919   61419 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:13:20.286063   61419 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:13:20.286182   61419 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1001 20:13:20.501264   61419 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 20:13:20.671710   61419 out.go:235]   - Generating certificates and keys ...
	I1001 20:13:20.672006   61419 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:13:20.672154   61419 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:13:20.672333   61419 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 20:13:20.672502   61419 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 20:13:20.672666   61419 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 20:13:20.672853   61419 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 20:13:20.672981   61419 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 20:13:20.673104   61419 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 20:13:20.673206   61419 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 20:13:20.673328   61419 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 20:13:20.673370   61419 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 20:13:20.673444   61419 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:13:20.760470   61419 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:13:21.080235   61419 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:13:21.175265   61419 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:13:21.272193   61419 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:13:21.297353   61419 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:13:21.297521   61419 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:13:21.297615   61419 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:13:21.447099   61419 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:13:21.448853   61419 out.go:235]   - Booting up control plane ...
	I1001 20:13:21.448987   61419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:13:21.462348   61419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:13:21.463664   61419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:13:21.465062   61419 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:13:21.469038   61419 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1001 20:14:01.470383   61419 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1001 20:14:01.470593   61419 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:14:01.470825   61419 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:14:06.471688   61419 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:14:06.471928   61419 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:14:16.472493   61419 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:14:16.472736   61419 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:14:36.473755   61419 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:14:36.474018   61419 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:15:16.473918   61419 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:15:16.474225   61419 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:15:16.474267   61419 kubeadm.go:310] 
	I1001 20:15:16.474329   61419 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1001 20:15:16.474398   61419 kubeadm.go:310] 		timed out waiting for the condition
	I1001 20:15:16.474422   61419 kubeadm.go:310] 
	I1001 20:15:16.474483   61419 kubeadm.go:310] 	This error is likely caused by:
	I1001 20:15:16.474528   61419 kubeadm.go:310] 		- The kubelet is not running
	I1001 20:15:16.474658   61419 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1001 20:15:16.474667   61419 kubeadm.go:310] 
	I1001 20:15:16.474796   61419 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1001 20:15:16.474846   61419 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1001 20:15:16.474886   61419 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1001 20:15:16.474899   61419 kubeadm.go:310] 
	I1001 20:15:16.475026   61419 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1001 20:15:16.475148   61419 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1001 20:15:16.475174   61419 kubeadm.go:310] 
	I1001 20:15:16.475336   61419 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1001 20:15:16.475464   61419 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1001 20:15:16.475569   61419 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1001 20:15:16.475688   61419 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1001 20:15:16.475706   61419 kubeadm.go:310] 
	I1001 20:15:16.476413   61419 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 20:15:16.476519   61419 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1001 20:15:16.476598   61419 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1001 20:15:16.476683   61419 kubeadm.go:394] duration metric: took 3m55.30446464s to StartCluster
	I1001 20:15:16.476769   61419 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:15:16.476847   61419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:15:16.524325   61419 cri.go:89] found id: ""
	I1001 20:15:16.524372   61419 logs.go:276] 0 containers: []
	W1001 20:15:16.524396   61419 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:15:16.524405   61419 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:15:16.524500   61419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:15:16.560398   61419 cri.go:89] found id: ""
	I1001 20:15:16.560429   61419 logs.go:276] 0 containers: []
	W1001 20:15:16.560441   61419 logs.go:278] No container was found matching "etcd"
	I1001 20:15:16.560449   61419 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:15:16.560528   61419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:15:16.594744   61419 cri.go:89] found id: ""
	I1001 20:15:16.594777   61419 logs.go:276] 0 containers: []
	W1001 20:15:16.594788   61419 logs.go:278] No container was found matching "coredns"
	I1001 20:15:16.594796   61419 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:15:16.594859   61419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:15:16.640994   61419 cri.go:89] found id: ""
	I1001 20:15:16.641029   61419 logs.go:276] 0 containers: []
	W1001 20:15:16.641039   61419 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:15:16.641049   61419 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:15:16.641115   61419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:15:16.685318   61419 cri.go:89] found id: ""
	I1001 20:15:16.685343   61419 logs.go:276] 0 containers: []
	W1001 20:15:16.685354   61419 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:15:16.685362   61419 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:15:16.685430   61419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:15:16.728137   61419 cri.go:89] found id: ""
	I1001 20:15:16.728172   61419 logs.go:276] 0 containers: []
	W1001 20:15:16.728186   61419 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:15:16.728196   61419 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:15:16.728260   61419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:15:16.766271   61419 cri.go:89] found id: ""
	I1001 20:15:16.766296   61419 logs.go:276] 0 containers: []
	W1001 20:15:16.766305   61419 logs.go:278] No container was found matching "kindnet"
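	The empty 'found id: ""' results above come from querying CRI-O once per expected component. A bash sketch of that probe loop, reusing the crictl invocation shown in the log; the component names are copied from the queries above:

	    # Per-component container probe, mirroring the cri.go listings above.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      if [ -z "$ids" ]; then
	        echo "no container was found matching \"$name\""
	      else
	        echo "$name: $ids"
	      fi
	    done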
	I1001 20:15:16.766313   61419 logs.go:123] Gathering logs for container status ...
	I1001 20:15:16.766324   61419 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:15:16.807184   61419 logs.go:123] Gathering logs for kubelet ...
	I1001 20:15:16.807231   61419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:15:16.862780   61419 logs.go:123] Gathering logs for dmesg ...
	I1001 20:15:16.862821   61419 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:15:16.878925   61419 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:15:16.878949   61419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:15:17.027134   61419 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:15:17.027168   61419 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:15:17.027183   61419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
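	The "Gathering logs" steps above collect container status, the kubelet and CRI-O journals, dmesg, and a node description. A sketch that bundles the same commands into one file suitable for attaching to an issue; the output path is illustrative, and describe nodes will fail here for the same reason it does in the log, since the apiserver never came up:

	    # Bundle the diagnostics gathered above into a single file (path is illustrative).
	    out=/tmp/old-k8s-version-359369-diagnostics.txt
	    {
	      sudo crictl ps -a || sudo docker ps -a
	      sudo journalctl -u kubelet -n 400 --no-pager
	      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	      sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	      sudo journalctl -u crio -n 400 --no-pager
	    } > "$out" 2>&1
	    echo "diagnostics written to $out"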
	W1001 20:15:17.147293   61419 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1001 20:15:17.147361   61419 out.go:270] * 
	* 
	W1001 20:15:17.147431   61419 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1001 20:15:17.147452   61419 out.go:270] * 
	* 
	W1001 20:15:17.148396   61419 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 20:15:17.151476   61419 out.go:201] 
	W1001 20:15:17.152594   61419 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1001 20:15:17.152657   61419 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1001 20:15:17.152677   61419 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1001 20:15:17.153899   61419 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-359369 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-359369 -n old-k8s-version-359369
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-359369 -n old-k8s-version-359369: exit status 6 (249.578368ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1001 20:15:17.440834   64327 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-359369" does not appear in /home/jenkins/minikube-integration/19736-11198/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-359369" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (300.51s)
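Editor's note: the FirstStart failure above is minikube exiting with K8S_KUBELET_NOT_RUNNING; during `kubeadm init` the kubelet on the old-k8s-version node never answered on localhost:10248, so the control plane never came up. A minimal diagnostic sketch along the lines the log itself suggests (it assumes the VM is still reachable over SSH; the profile name and start flags are the ones from this run):

	# inspect the kubelet from inside the VM
	out/minikube-linux-amd64 ssh -p old-k8s-version-359369 "sudo systemctl status kubelet"
	out/minikube-linux-amd64 ssh -p old-k8s-version-359369 "sudo journalctl -xeu kubelet | tail -n 100"
	# list any control-plane containers cri-o managed to start
	out/minikube-linux-amd64 ssh -p old-k8s-version-359369 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry the start with the cgroup-driver hint printed in the log
	out/minikube-linux-amd64 start -p old-k8s-version-359369 --memory=2200 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd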

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-262337 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-262337 --alsologtostderr -v=3: exit status 82 (2m0.554482713s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-262337"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 20:13:19.866157   63473 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:13:19.866369   63473 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:13:19.866395   63473 out.go:358] Setting ErrFile to fd 2...
	I1001 20:13:19.866407   63473 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:13:19.866608   63473 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 20:13:19.866888   63473 out.go:352] Setting JSON to false
	I1001 20:13:19.866984   63473 mustload.go:65] Loading cluster: no-preload-262337
	I1001 20:13:19.867343   63473 config.go:182] Loaded profile config "no-preload-262337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:13:19.867428   63473 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/no-preload-262337/config.json ...
	I1001 20:13:19.867621   63473 mustload.go:65] Loading cluster: no-preload-262337
	I1001 20:13:19.867762   63473 config.go:182] Loaded profile config "no-preload-262337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:13:19.867820   63473 stop.go:39] StopHost: no-preload-262337
	I1001 20:13:19.868199   63473 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:13:19.868270   63473 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:13:19.885683   63473 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33391
	I1001 20:13:19.886119   63473 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:13:19.886779   63473 main.go:141] libmachine: Using API Version  1
	I1001 20:13:19.886798   63473 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:13:19.887237   63473 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:13:19.889094   63473 out.go:177] * Stopping node "no-preload-262337"  ...
	I1001 20:13:19.893398   63473 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1001 20:13:19.893496   63473 main.go:141] libmachine: (no-preload-262337) Calling .DriverName
	I1001 20:13:19.896536   63473 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1001 20:13:19.896580   63473 main.go:141] libmachine: (no-preload-262337) Calling .GetSSHHostname
	I1001 20:13:19.900601   63473 main.go:141] libmachine: (no-preload-262337) DBG | domain no-preload-262337 has defined MAC address 52:54:00:8e:b1:d4 in network mk-no-preload-262337
	I1001 20:13:19.901161   63473 main.go:141] libmachine: (no-preload-262337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:b1:d4", ip: ""} in network mk-no-preload-262337: {Iface:virbr3 ExpiryTime:2024-10-01 21:12:12 +0000 UTC Type:0 Mac:52:54:00:8e:b1:d4 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:no-preload-262337 Clientid:01:52:54:00:8e:b1:d4}
	I1001 20:13:19.901194   63473 main.go:141] libmachine: (no-preload-262337) DBG | domain no-preload-262337 has defined IP address 192.168.61.93 and MAC address 52:54:00:8e:b1:d4 in network mk-no-preload-262337
	I1001 20:13:19.901397   63473 main.go:141] libmachine: (no-preload-262337) Calling .GetSSHPort
	I1001 20:13:19.901610   63473 main.go:141] libmachine: (no-preload-262337) Calling .GetSSHKeyPath
	I1001 20:13:19.901783   63473 main.go:141] libmachine: (no-preload-262337) Calling .GetSSHUsername
	I1001 20:13:19.902068   63473 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/no-preload-262337/id_rsa Username:docker}
	I1001 20:13:20.005047   63473 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1001 20:13:20.081803   63473 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1001 20:13:20.146504   63473 main.go:141] libmachine: Stopping "no-preload-262337"...
	I1001 20:13:20.146541   63473 main.go:141] libmachine: (no-preload-262337) Calling .GetState
	I1001 20:13:20.148490   63473 main.go:141] libmachine: (no-preload-262337) Calling .Stop
	I1001 20:13:20.152593   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 0/120
	I1001 20:13:21.155228   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 1/120
	I1001 20:13:22.156730   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 2/120
	I1001 20:13:23.158399   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 3/120
	I1001 20:13:24.160326   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 4/120
	I1001 20:13:25.162732   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 5/120
	I1001 20:13:26.164802   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 6/120
	I1001 20:13:27.166310   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 7/120
	I1001 20:13:28.167948   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 8/120
	I1001 20:13:29.169377   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 9/120
	I1001 20:13:30.171844   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 10/120
	I1001 20:13:31.173345   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 11/120
	I1001 20:13:32.175045   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 12/120
	I1001 20:13:33.176534   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 13/120
	I1001 20:13:34.178045   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 14/120
	I1001 20:13:35.180092   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 15/120
	I1001 20:13:36.181346   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 16/120
	I1001 20:13:37.182701   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 17/120
	I1001 20:13:38.184056   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 18/120
	I1001 20:13:39.185845   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 19/120
	I1001 20:13:40.188187   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 20/120
	I1001 20:13:41.189730   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 21/120
	I1001 20:13:42.191095   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 22/120
	I1001 20:13:43.192575   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 23/120
	I1001 20:13:44.195109   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 24/120
	I1001 20:13:45.197059   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 25/120
	I1001 20:13:46.198687   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 26/120
	I1001 20:13:47.200384   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 27/120
	I1001 20:13:48.201852   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 28/120
	I1001 20:13:49.203709   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 29/120
	I1001 20:13:50.205501   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 30/120
	I1001 20:13:51.206983   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 31/120
	I1001 20:13:52.208454   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 32/120
	I1001 20:13:53.209925   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 33/120
	I1001 20:13:54.211364   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 34/120
	I1001 20:13:55.213883   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 35/120
	I1001 20:13:56.215621   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 36/120
	I1001 20:13:57.217106   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 37/120
	I1001 20:13:58.218926   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 38/120
	I1001 20:13:59.220326   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 39/120
	I1001 20:14:00.222783   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 40/120
	I1001 20:14:01.224395   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 41/120
	I1001 20:14:02.226075   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 42/120
	I1001 20:14:03.227489   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 43/120
	I1001 20:14:04.228946   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 44/120
	I1001 20:14:05.230983   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 45/120
	I1001 20:14:06.232687   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 46/120
	I1001 20:14:07.235091   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 47/120
	I1001 20:14:08.237089   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 48/120
	I1001 20:14:09.238580   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 49/120
	I1001 20:14:10.240296   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 50/120
	I1001 20:14:11.241965   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 51/120
	I1001 20:14:12.243848   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 52/120
	I1001 20:14:13.245758   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 53/120
	I1001 20:14:14.247803   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 54/120
	I1001 20:14:15.250402   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 55/120
	I1001 20:14:16.251828   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 56/120
	I1001 20:14:17.253723   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 57/120
	I1001 20:14:18.255657   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 58/120
	I1001 20:14:19.257028   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 59/120
	I1001 20:14:20.258616   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 60/120
	I1001 20:14:21.260158   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 61/120
	I1001 20:14:22.261567   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 62/120
	I1001 20:14:23.263011   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 63/120
	I1001 20:14:24.264704   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 64/120
	I1001 20:14:25.266913   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 65/120
	I1001 20:14:26.268542   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 66/120
	I1001 20:14:27.270140   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 67/120
	I1001 20:14:28.271465   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 68/120
	I1001 20:14:29.272863   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 69/120
	I1001 20:14:30.274039   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 70/120
	I1001 20:14:31.275511   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 71/120
	I1001 20:14:32.276840   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 72/120
	I1001 20:14:33.279080   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 73/120
	I1001 20:14:34.280624   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 74/120
	I1001 20:14:35.282672   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 75/120
	I1001 20:14:36.284256   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 76/120
	I1001 20:14:37.285598   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 77/120
	I1001 20:14:38.287282   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 78/120
	I1001 20:14:39.288896   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 79/120
	I1001 20:14:40.291300   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 80/120
	I1001 20:14:41.292954   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 81/120
	I1001 20:14:42.294388   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 82/120
	I1001 20:14:43.295992   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 83/120
	I1001 20:14:44.297551   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 84/120
	I1001 20:14:45.299695   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 85/120
	I1001 20:14:46.301513   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 86/120
	I1001 20:14:47.303054   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 87/120
	I1001 20:14:48.304477   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 88/120
	I1001 20:14:49.306058   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 89/120
	I1001 20:14:50.308494   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 90/120
	I1001 20:14:51.310704   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 91/120
	I1001 20:14:52.312566   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 92/120
	I1001 20:14:53.314084   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 93/120
	I1001 20:14:54.315617   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 94/120
	I1001 20:14:55.317954   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 95/120
	I1001 20:14:56.319747   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 96/120
	I1001 20:14:57.321368   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 97/120
	I1001 20:14:58.322946   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 98/120
	I1001 20:14:59.324715   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 99/120
	I1001 20:15:00.326980   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 100/120
	I1001 20:15:01.328611   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 101/120
	I1001 20:15:02.330009   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 102/120
	I1001 20:15:03.331639   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 103/120
	I1001 20:15:04.332989   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 104/120
	I1001 20:15:05.334977   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 105/120
	I1001 20:15:06.336377   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 106/120
	I1001 20:15:07.337749   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 107/120
	I1001 20:15:08.339185   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 108/120
	I1001 20:15:09.340730   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 109/120
	I1001 20:15:10.343035   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 110/120
	I1001 20:15:11.344584   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 111/120
	I1001 20:15:12.345984   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 112/120
	I1001 20:15:13.347509   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 113/120
	I1001 20:15:14.348972   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 114/120
	I1001 20:15:15.351185   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 115/120
	I1001 20:15:16.353767   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 116/120
	I1001 20:15:17.355058   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 117/120
	I1001 20:15:18.357229   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 118/120
	I1001 20:15:19.358717   63473 main.go:141] libmachine: (no-preload-262337) Waiting for machine to stop 119/120
	I1001 20:15:20.359481   63473 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1001 20:15:20.359553   63473 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1001 20:15:20.361385   63473 out.go:201] 
	W1001 20:15:20.362665   63473 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1001 20:15:20.362684   63473 out.go:270] * 
	* 
	W1001 20:15:20.365246   63473 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 20:15:20.366327   63473 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-262337 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-262337 -n no-preload-262337
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-262337 -n no-preload-262337: exit status 3 (18.62910854s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1001 20:15:38.996767   64470 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.93:22: connect: no route to host
	E1001 20:15:38.996787   64470 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.93:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-262337" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.18s)
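Editor's note: here `minikube stop` exhausted all 120 poll attempts because the KVM guest stayed in the "Running" state, so the command exited 82 with GUEST_STOP_TIMEOUT, and the follow-up status check could no longer reach the VM over SSH ("no route to host"). A hedged sketch of checking the same machine from the libvirt side; the domain name comes from the DHCP-lease lines above, and the forced power-off is a recovery step an operator might take, not something the test performs:

	# see what libvirt thinks the guest is doing
	virsh list --all
	virsh domstate no-preload-262337
	# ask for a clean shutdown again; force off only as a last resort
	virsh shutdown no-preload-262337
	virsh destroy no-preload-262337
	# confirm minikube now reports the profile as stopped
	out/minikube-linux-amd64 status -p no-preload-262337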

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-106982 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-106982 --alsologtostderr -v=3: exit status 82 (2m0.540000601s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-106982"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 20:13:51.487682   63735 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:13:51.488030   63735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:13:51.488046   63735 out.go:358] Setting ErrFile to fd 2...
	I1001 20:13:51.488053   63735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:13:51.488350   63735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 20:13:51.488723   63735 out.go:352] Setting JSON to false
	I1001 20:13:51.488845   63735 mustload.go:65] Loading cluster: embed-certs-106982
	I1001 20:13:51.489378   63735 config.go:182] Loaded profile config "embed-certs-106982": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:13:51.489490   63735 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/embed-certs-106982/config.json ...
	I1001 20:13:51.489772   63735 mustload.go:65] Loading cluster: embed-certs-106982
	I1001 20:13:51.489940   63735 config.go:182] Loaded profile config "embed-certs-106982": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:13:51.489981   63735 stop.go:39] StopHost: embed-certs-106982
	I1001 20:13:51.490588   63735 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:13:51.490652   63735 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:13:51.505305   63735 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39833
	I1001 20:13:51.505722   63735 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:13:51.506261   63735 main.go:141] libmachine: Using API Version  1
	I1001 20:13:51.506281   63735 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:13:51.506617   63735 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:13:51.508480   63735 out.go:177] * Stopping node "embed-certs-106982"  ...
	I1001 20:13:51.509861   63735 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1001 20:13:51.509895   63735 main.go:141] libmachine: (embed-certs-106982) Calling .DriverName
	I1001 20:13:51.510124   63735 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1001 20:13:51.510157   63735 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHHostname
	I1001 20:13:51.512934   63735 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:13:51.513349   63735 main.go:141] libmachine: (embed-certs-106982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:ab:67", ip: ""} in network mk-embed-certs-106982: {Iface:virbr1 ExpiryTime:2024-10-01 21:13:01 +0000 UTC Type:0 Mac:52:54:00:7f:ab:67 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:embed-certs-106982 Clientid:01:52:54:00:7f:ab:67}
	I1001 20:13:51.513364   63735 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined IP address 192.168.39.203 and MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:13:51.513604   63735 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHPort
	I1001 20:13:51.513810   63735 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHKeyPath
	I1001 20:13:51.513953   63735 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHUsername
	I1001 20:13:51.514100   63735 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/embed-certs-106982/id_rsa Username:docker}
	I1001 20:13:51.610584   63735 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1001 20:13:51.680921   63735 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1001 20:13:51.757383   63735 main.go:141] libmachine: Stopping "embed-certs-106982"...
	I1001 20:13:51.757425   63735 main.go:141] libmachine: (embed-certs-106982) Calling .GetState
	I1001 20:13:51.759068   63735 main.go:141] libmachine: (embed-certs-106982) Calling .Stop
	I1001 20:13:51.762604   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 0/120
	I1001 20:13:52.764069   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 1/120
	I1001 20:13:53.765412   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 2/120
	I1001 20:13:54.766835   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 3/120
	I1001 20:13:55.768277   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 4/120
	I1001 20:13:56.770582   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 5/120
	I1001 20:13:57.772128   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 6/120
	I1001 20:13:58.773718   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 7/120
	I1001 20:13:59.775445   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 8/120
	I1001 20:14:00.777121   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 9/120
	I1001 20:14:01.779161   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 10/120
	I1001 20:14:02.780647   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 11/120
	I1001 20:14:03.782149   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 12/120
	I1001 20:14:04.783538   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 13/120
	I1001 20:14:05.785168   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 14/120
	I1001 20:14:06.787542   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 15/120
	I1001 20:14:07.788988   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 16/120
	I1001 20:14:08.790559   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 17/120
	I1001 20:14:09.792240   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 18/120
	I1001 20:14:10.794217   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 19/120
	I1001 20:14:11.795768   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 20/120
	I1001 20:14:12.797512   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 21/120
	I1001 20:14:13.799068   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 22/120
	I1001 20:14:14.800812   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 23/120
	I1001 20:14:15.802396   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 24/120
	I1001 20:14:16.804677   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 25/120
	I1001 20:14:17.807117   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 26/120
	I1001 20:14:18.808863   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 27/120
	I1001 20:14:19.811387   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 28/120
	I1001 20:14:20.812819   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 29/120
	I1001 20:14:21.815091   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 30/120
	I1001 20:14:22.816842   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 31/120
	I1001 20:14:23.818228   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 32/120
	I1001 20:14:24.819592   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 33/120
	I1001 20:14:25.821034   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 34/120
	I1001 20:14:26.823590   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 35/120
	I1001 20:14:27.825717   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 36/120
	I1001 20:14:28.827250   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 37/120
	I1001 20:14:29.828790   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 38/120
	I1001 20:14:30.830348   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 39/120
	I1001 20:14:31.832651   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 40/120
	I1001 20:14:32.835155   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 41/120
	I1001 20:14:33.837266   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 42/120
	I1001 20:14:34.839358   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 43/120
	I1001 20:14:35.841029   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 44/120
	I1001 20:14:36.843719   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 45/120
	I1001 20:14:37.845306   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 46/120
	I1001 20:14:38.847081   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 47/120
	I1001 20:14:39.848784   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 48/120
	I1001 20:14:40.850596   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 49/120
	I1001 20:14:41.852961   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 50/120
	I1001 20:14:42.854741   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 51/120
	I1001 20:14:43.856261   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 52/120
	I1001 20:14:44.857967   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 53/120
	I1001 20:14:45.859485   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 54/120
	I1001 20:14:46.861525   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 55/120
	I1001 20:14:47.862982   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 56/120
	I1001 20:14:48.864558   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 57/120
	I1001 20:14:49.866867   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 58/120
	I1001 20:14:50.869213   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 59/120
	I1001 20:14:51.871676   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 60/120
	I1001 20:14:52.873486   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 61/120
	I1001 20:14:53.875214   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 62/120
	I1001 20:14:54.876849   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 63/120
	I1001 20:14:55.878629   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 64/120
	I1001 20:14:56.881373   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 65/120
	I1001 20:14:57.882942   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 66/120
	I1001 20:14:58.884670   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 67/120
	I1001 20:14:59.887070   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 68/120
	I1001 20:15:00.888756   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 69/120
	I1001 20:15:01.891152   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 70/120
	I1001 20:15:02.892962   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 71/120
	I1001 20:15:03.894595   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 72/120
	I1001 20:15:04.896012   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 73/120
	I1001 20:15:05.897537   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 74/120
	I1001 20:15:06.899740   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 75/120
	I1001 20:15:07.901329   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 76/120
	I1001 20:15:08.902961   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 77/120
	I1001 20:15:09.904380   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 78/120
	I1001 20:15:10.905903   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 79/120
	I1001 20:15:11.908639   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 80/120
	I1001 20:15:12.910955   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 81/120
	I1001 20:15:13.912543   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 82/120
	I1001 20:15:14.913924   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 83/120
	I1001 20:15:15.915683   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 84/120
	I1001 20:15:16.917415   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 85/120
	I1001 20:15:17.919399   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 86/120
	I1001 20:15:18.920804   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 87/120
	I1001 20:15:19.923001   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 88/120
	I1001 20:15:20.924498   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 89/120
	I1001 20:15:21.925841   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 90/120
	I1001 20:15:22.928473   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 91/120
	I1001 20:15:23.929998   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 92/120
	I1001 20:15:24.931629   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 93/120
	I1001 20:15:25.933272   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 94/120
	I1001 20:15:26.935586   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 95/120
	I1001 20:15:27.937238   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 96/120
	I1001 20:15:28.938770   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 97/120
	I1001 20:15:29.940385   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 98/120
	I1001 20:15:30.942182   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 99/120
	I1001 20:15:31.944287   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 100/120
	I1001 20:15:32.946304   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 101/120
	I1001 20:15:33.947831   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 102/120
	I1001 20:15:34.949206   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 103/120
	I1001 20:15:35.950789   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 104/120
	I1001 20:15:36.952844   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 105/120
	I1001 20:15:37.954948   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 106/120
	I1001 20:15:38.956648   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 107/120
	I1001 20:15:39.958055   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 108/120
	I1001 20:15:40.959463   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 109/120
	I1001 20:15:41.960701   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 110/120
	I1001 20:15:42.962234   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 111/120
	I1001 20:15:43.963504   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 112/120
	I1001 20:15:44.964921   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 113/120
	I1001 20:15:45.966408   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 114/120
	I1001 20:15:46.968623   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 115/120
	I1001 20:15:47.970249   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 116/120
	I1001 20:15:48.971884   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 117/120
	I1001 20:15:49.973450   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 118/120
	I1001 20:15:50.974810   63735 main.go:141] libmachine: (embed-certs-106982) Waiting for machine to stop 119/120
	I1001 20:15:51.976047   63735 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1001 20:15:51.976104   63735 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1001 20:15:51.977906   63735 out.go:201] 
	W1001 20:15:51.979241   63735 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1001 20:15:51.979256   63735 out.go:270] * 
	* 
	W1001 20:15:51.981821   63735 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 20:15:51.983039   63735 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-106982 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-106982 -n embed-certs-106982
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-106982 -n embed-certs-106982: exit status 3 (18.500905104s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1001 20:16:10.484682   64716 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	E1001 20:16:10.484712   64716 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-106982" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.04s)
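The stop failure above reduces to two commands: the stop itself (which returned exit status 82 / GUEST_STOP_TIMEOUT after the 120 wait iterations logged above) and the status check the test expects to print "Stopped". A minimal Go sketch of that same flow, useful for reproducing the hang outside the harness, is below; it assumes the embed-certs-106982 profile already exists and that a minikube binary is on PATH (the harness invokes out/minikube-linux-amd64 instead), and it is not the harness's actual helper code.

	// repro_stop.go - sketch of the stop-then-verify sequence the test performs.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		profile := "embed-certs-106982"

		// 1. Stop the VM; the test treats any non-zero exit (here: 82) as a failure.
		stop := exec.Command("minikube", "stop", "-p", profile, "--alsologtostderr", "-v=3")
		stopOut, stopErr := stop.CombinedOutput()
		fmt.Printf("stop:\n%s\nerr: %v\n", stopOut, stopErr)

		// 2. Check the host state; the test expects the literal string "Stopped",
		//    but the log above shows "Error" because SSH to the node is unreachable.
		status := exec.Command("minikube", "status", "--format={{.Host}}", "-p", profile, "-n", profile)
		statusOut, _ := status.Output() // non-zero exit is expected for a non-Running host
		fmt.Printf("host state: %s\n", statusOut)
	}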

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-359369 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-359369 create -f testdata/busybox.yaml: exit status 1 (48.624649ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-359369" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-359369 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-359369 -n old-k8s-version-359369
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-359369 -n old-k8s-version-359369: exit status 6 (231.586039ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1001 20:15:17.725657   64366 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-359369" does not appear in /home/jenkins/minikube-integration/19736-11198/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-359369" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-359369 -n old-k8s-version-359369
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-359369 -n old-k8s-version-359369: exit status 6 (269.888895ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1001 20:15:17.993936   64395 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-359369" does not appear in /home/jenkins/minikube-integration/19736-11198/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-359369" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.55s)
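The DeployApp failure is not about the busybox manifest itself: kubectl simply has no "old-k8s-version-359369" context, and minikube's own status output above already suggests running `minikube update-context`. A hedged Go sketch of that pre-flight check is below; the profile name and manifest path come from the log, while the check-then-repair flow itself is an illustration, not the harness's behaviour.

	// context_check.go - sketch: verify the kubectl context exists before `kubectl create`,
	// repairing it with `minikube update-context` as the warning above suggests.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "old-k8s-version-359369"

		// List context names known to kubectl.
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			fmt.Println("get-contexts failed:", err)
			return
		}
		if !strings.Contains(string(out), profile) {
			// Rewrite the kubeconfig entry for this profile.
			fixOut, fixErr := exec.Command("minikube", "update-context", "-p", profile).CombinedOutput()
			fmt.Printf("update-context:\n%s\nerr: %v\n", fixOut, fixErr)
		}

		// Only then apply the manifest the test deploys (path as quoted in the log).
		createOut, createErr := exec.Command("kubectl", "--context", profile, "create", "-f", "testdata/busybox.yaml").CombinedOutput()
		fmt.Printf("create:\n%s\nerr: %v\n", createOut, createErr)
	}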

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (102.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-359369 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-359369 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m42.409596277s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-359369 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-359369 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-359369 describe deploy/metrics-server -n kube-system: exit status 1 (43.852597ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-359369" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-359369 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-359369 -n old-k8s-version-359369
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-359369 -n old-k8s-version-359369: exit status 6 (221.104666ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1001 20:17:00.672615   65454 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-359369" does not appear in /home/jenkins/minikube-integration/19736-11198/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-359369" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (102.68s)
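Here the metrics-server manifests reached the node, but every apply inside the VM was refused at localhost:8443, i.e. the apiserver was not serving. A small sketch of probing apiserver readiness before attempting the enable follows; the /readyz probe, the 2-minute budget, and the plain `addons enable` call (without the test's --images/--registries overrides) are assumptions for illustration, not what minikube or the test actually does.

	// apiserver_ready.go - sketch: wait for the apiserver to answer before `minikube addons enable`.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		profile := "old-k8s-version-359369"
		deadline := time.Now().Add(2 * time.Minute) // assumed budget

		for time.Now().Before(deadline) {
			// /readyz returns "ok" once the apiserver is serving requests.
			if out, err := exec.Command("kubectl", "--context", profile, "get", "--raw=/readyz").CombinedOutput(); err == nil {
				fmt.Printf("apiserver ready: %s\n", out)
				enableOut, enableErr := exec.Command("minikube", "addons", "enable", "metrics-server", "-p", profile).CombinedOutput()
				fmt.Printf("addons enable:\n%s\nerr: %v\n", enableOut, enableErr)
				return
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("apiserver never became ready before the deadline")
	}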

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-262337 -n no-preload-262337
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-262337 -n no-preload-262337: exit status 3 (3.168000006s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1001 20:15:42.164755   64550 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.93:22: connect: no route to host
	E1001 20:15:42.164781   64550 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.93:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-262337 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-262337 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152680727s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.93:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-262337 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-262337 -n no-preload-262337
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-262337 -n no-preload-262337: exit status 3 (3.063401093s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1001 20:15:51.380843   64630 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.93:22: connect: no route to host
	E1001 20:15:51.380864   64630 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.93:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-262337" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
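This failure and the embed-certs variant below follow the same pattern: the earlier stop timed out, the host status check reports "Error" instead of the expected "Stopped" (SSH to the node has no route to host), and the subsequent `addons enable dashboard` then fails in its paused-container check for the same reason. A hedged sketch of polling for the expected "Stopped" state before enabling the addon is below; the 3-minute budget and 10-second interval are assumptions, and the same sketch applies to the embed-certs profile.

	// wait_stopped.go - sketch: poll the host state until "Stopped" before enabling the dashboard addon.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		profile := "no-preload-262337"
		deadline := time.Now().Add(3 * time.Minute) // assumed budget

		for time.Now().Before(deadline) {
			out, _ := exec.Command("minikube", "status", "--format={{.Host}}", "-p", profile, "-n", profile).Output()
			state := strings.TrimSpace(string(out))
			fmt.Println("host state:", state)
			if state == "Stopped" {
				enableOut, err := exec.Command("minikube", "addons", "enable", "dashboard", "-p", profile,
					"--images=MetricsScraper=registry.k8s.io/echoserver:1.4").CombinedOutput()
				fmt.Printf("addons enable:\n%s\nerr: %v\n", enableOut, err)
				return
			}
			time.Sleep(10 * time.Second)
		}
		fmt.Println("host never reached Stopped before the deadline")
	}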

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-106982 -n embed-certs-106982
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-106982 -n embed-certs-106982: exit status 3 (3.168352241s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1001 20:16:13.652926   64825 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	E1001 20:16:13.652964   64825 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-106982 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-106982 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.149587204s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-106982 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-106982 -n embed-certs-106982
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-106982 -n embed-certs-106982: exit status 3 (3.064949832s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1001 20:16:22.868813   65201 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	E1001 20:16:22.868837   65201 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-106982" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (755.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-359369 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E1001 20:21:34.839807   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-359369 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m34.130380012s)

                                                
                                                
-- stdout --
	* [old-k8s-version-359369] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19736
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-359369" primary control-plane node in "old-k8s-version-359369" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-359369" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 20:17:07.212312   65592 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:17:07.212454   65592 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:17:07.212464   65592 out.go:358] Setting ErrFile to fd 2...
	I1001 20:17:07.212468   65592 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:17:07.212642   65592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 20:17:07.213181   65592 out.go:352] Setting JSON to false
	I1001 20:17:07.214175   65592 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7169,"bootTime":1727806658,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 20:17:07.214276   65592 start.go:139] virtualization: kvm guest
	I1001 20:17:07.216064   65592 out.go:177] * [old-k8s-version-359369] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 20:17:07.217304   65592 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 20:17:07.217303   65592 notify.go:220] Checking for updates...
	I1001 20:17:07.218691   65592 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 20:17:07.220050   65592 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:17:07.221584   65592 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 20:17:07.222964   65592 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 20:17:07.224103   65592 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 20:17:07.225906   65592 config.go:182] Loaded profile config "old-k8s-version-359369": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1001 20:17:07.226515   65592 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:17:07.226598   65592 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:17:07.241978   65592 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41435
	I1001 20:17:07.242391   65592 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:17:07.242984   65592 main.go:141] libmachine: Using API Version  1
	I1001 20:17:07.243000   65592 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:17:07.243408   65592 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:17:07.243594   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .DriverName
	I1001 20:17:07.245179   65592 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1001 20:17:07.246276   65592 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 20:17:07.246593   65592 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:17:07.246639   65592 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:17:07.261984   65592 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42085
	I1001 20:17:07.262423   65592 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:17:07.262902   65592 main.go:141] libmachine: Using API Version  1
	I1001 20:17:07.262921   65592 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:17:07.263265   65592 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:17:07.263495   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .DriverName
	I1001 20:17:07.300046   65592 out.go:177] * Using the kvm2 driver based on existing profile
	I1001 20:17:07.301173   65592 start.go:297] selected driver: kvm2
	I1001 20:17:07.301192   65592 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-359369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-359369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.110 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:262
80h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:17:07.301360   65592 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 20:17:07.302073   65592 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:17:07.302146   65592 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-11198/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 20:17:07.317372   65592 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 20:17:07.317810   65592 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:17:07.317842   65592 cni.go:84] Creating CNI manager for ""
	I1001 20:17:07.317888   65592 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:17:07.317920   65592 start.go:340] cluster config:
	{Name:old-k8s-version-359369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-359369 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.110 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:17:07.318033   65592 iso.go:125] acquiring lock: {Name:mk06aa0d5182c9fbfa5e40313025cbec2b4400a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:17:07.319705   65592 out.go:177] * Starting "old-k8s-version-359369" primary control-plane node in "old-k8s-version-359369" cluster
	I1001 20:17:07.320907   65592 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1001 20:17:07.320965   65592 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1001 20:17:07.320975   65592 cache.go:56] Caching tarball of preloaded images
	I1001 20:17:07.321081   65592 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 20:17:07.321092   65592 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1001 20:17:07.321185   65592 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/config.json ...
	I1001 20:17:07.321390   65592 start.go:360] acquireMachinesLock for old-k8s-version-359369: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 20:21:13.849614   65592 start.go:364] duration metric: took 4m6.528172495s to acquireMachinesLock for "old-k8s-version-359369"
	I1001 20:21:13.849680   65592 start.go:96] Skipping create...Using existing machine configuration
	I1001 20:21:13.849691   65592 fix.go:54] fixHost starting: 
	I1001 20:21:13.850236   65592 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:21:13.850288   65592 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:21:13.868582   65592 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43577
	I1001 20:21:13.869023   65592 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:21:13.869557   65592 main.go:141] libmachine: Using API Version  1
	I1001 20:21:13.869586   65592 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:21:13.869948   65592 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:21:13.870138   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .DriverName
	I1001 20:21:13.870279   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetState
	I1001 20:21:13.872007   65592 fix.go:112] recreateIfNeeded on old-k8s-version-359369: state=Stopped err=<nil>
	I1001 20:21:13.872053   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .DriverName
	W1001 20:21:13.872208   65592 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 20:21:13.873879   65592 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-359369" ...
	I1001 20:21:13.874857   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .Start
	I1001 20:21:13.875038   65592 main.go:141] libmachine: (old-k8s-version-359369) Ensuring networks are active...
	I1001 20:21:13.875847   65592 main.go:141] libmachine: (old-k8s-version-359369) Ensuring network default is active
	I1001 20:21:13.876319   65592 main.go:141] libmachine: (old-k8s-version-359369) Ensuring network mk-old-k8s-version-359369 is active
	I1001 20:21:13.876783   65592 main.go:141] libmachine: (old-k8s-version-359369) Getting domain xml...
	I1001 20:21:13.877508   65592 main.go:141] libmachine: (old-k8s-version-359369) Creating domain...
	I1001 20:21:15.346362   65592 main.go:141] libmachine: (old-k8s-version-359369) Waiting to get IP...
	I1001 20:21:15.347302   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:15.347764   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | unable to find current IP address of domain old-k8s-version-359369 in network mk-old-k8s-version-359369
	I1001 20:21:15.347879   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:21:15.347744   67099 retry.go:31] will retry after 194.380129ms: waiting for machine to come up
	I1001 20:21:15.544378   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:15.544831   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | unable to find current IP address of domain old-k8s-version-359369 in network mk-old-k8s-version-359369
	I1001 20:21:15.544855   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:21:15.544815   67099 retry.go:31] will retry after 288.413932ms: waiting for machine to come up
	I1001 20:21:15.836027   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:15.836760   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | unable to find current IP address of domain old-k8s-version-359369 in network mk-old-k8s-version-359369
	I1001 20:21:15.836784   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:21:15.836721   67099 retry.go:31] will retry after 318.226557ms: waiting for machine to come up
	I1001 20:21:16.156381   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:16.157109   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | unable to find current IP address of domain old-k8s-version-359369 in network mk-old-k8s-version-359369
	I1001 20:21:16.157142   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:21:16.157006   67099 retry.go:31] will retry after 554.679532ms: waiting for machine to come up
	I1001 20:21:16.713817   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:16.714607   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | unable to find current IP address of domain old-k8s-version-359369 in network mk-old-k8s-version-359369
	I1001 20:21:16.714634   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:21:16.714537   67099 retry.go:31] will retry after 654.862993ms: waiting for machine to come up
	I1001 20:21:17.371696   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:17.372240   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | unable to find current IP address of domain old-k8s-version-359369 in network mk-old-k8s-version-359369
	I1001 20:21:17.372273   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:21:17.372179   67099 retry.go:31] will retry after 704.775178ms: waiting for machine to come up
	I1001 20:21:18.078819   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:18.079622   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | unable to find current IP address of domain old-k8s-version-359369 in network mk-old-k8s-version-359369
	I1001 20:21:18.079644   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:21:18.079534   67099 retry.go:31] will retry after 943.093679ms: waiting for machine to come up
	I1001 20:21:19.024895   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:19.025364   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | unable to find current IP address of domain old-k8s-version-359369 in network mk-old-k8s-version-359369
	I1001 20:21:19.025390   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:21:19.025322   67099 retry.go:31] will retry after 1.379979688s: waiting for machine to come up
	I1001 20:21:20.406769   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:20.407230   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | unable to find current IP address of domain old-k8s-version-359369 in network mk-old-k8s-version-359369
	I1001 20:21:20.407257   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:21:20.407194   67099 retry.go:31] will retry after 1.229852512s: waiting for machine to come up
	I1001 20:21:21.638879   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:21.639469   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | unable to find current IP address of domain old-k8s-version-359369 in network mk-old-k8s-version-359369
	I1001 20:21:21.639498   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:21:21.639401   67099 retry.go:31] will retry after 2.32215262s: waiting for machine to come up
	I1001 20:21:23.962949   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:23.963460   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | unable to find current IP address of domain old-k8s-version-359369 in network mk-old-k8s-version-359369
	I1001 20:21:23.963488   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:21:23.963405   67099 retry.go:31] will retry after 2.860779972s: waiting for machine to come up
	I1001 20:21:26.825613   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:26.826103   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | unable to find current IP address of domain old-k8s-version-359369 in network mk-old-k8s-version-359369
	I1001 20:21:26.826134   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:21:26.826057   67099 retry.go:31] will retry after 2.74634527s: waiting for machine to come up
	I1001 20:21:29.573629   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:29.574174   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | unable to find current IP address of domain old-k8s-version-359369 in network mk-old-k8s-version-359369
	I1001 20:21:29.574205   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | I1001 20:21:29.574117   67099 retry.go:31] will retry after 4.529467207s: waiting for machine to come up
	I1001 20:21:34.105262   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:34.105795   65592 main.go:141] libmachine: (old-k8s-version-359369) Found IP for machine: 192.168.72.110
	I1001 20:21:34.105828   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has current primary IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:34.105839   65592 main.go:141] libmachine: (old-k8s-version-359369) Reserving static IP address...
	I1001 20:21:34.106311   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "old-k8s-version-359369", mac: "52:54:00:b5:7f:54", ip: "192.168.72.110"} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:21:25 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:21:34.106332   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | skip adding static IP to network mk-old-k8s-version-359369 - found existing host DHCP lease matching {name: "old-k8s-version-359369", mac: "52:54:00:b5:7f:54", ip: "192.168.72.110"}
	I1001 20:21:34.106341   65592 main.go:141] libmachine: (old-k8s-version-359369) Reserved static IP address: 192.168.72.110
	I1001 20:21:34.106354   65592 main.go:141] libmachine: (old-k8s-version-359369) Waiting for SSH to be available...
	I1001 20:21:34.106391   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | Getting to WaitForSSH function...
	I1001 20:21:34.108833   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:34.109184   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:21:25 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:21:34.109212   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:34.109336   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | Using SSH client type: external
	I1001 20:21:34.109363   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/old-k8s-version-359369/id_rsa (-rw-------)
	I1001 20:21:34.109391   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/old-k8s-version-359369/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 20:21:34.109405   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | About to run SSH command:
	I1001 20:21:34.109430   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | exit 0
	I1001 20:21:34.228540   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | SSH cmd err, output: <nil>: 
	I1001 20:21:34.228943   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetConfigRaw
	I1001 20:21:34.229691   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetIP
	I1001 20:21:34.232105   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:34.232586   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:21:25 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:21:34.232616   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:34.232978   65592 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/config.json ...
	I1001 20:21:34.233225   65592 machine.go:93] provisionDockerMachine start ...
	I1001 20:21:34.233246   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .DriverName
	I1001 20:21:34.233490   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHHostname
	I1001 20:21:34.236135   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:34.236465   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:21:25 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:21:34.236496   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:34.236649   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHPort
	I1001 20:21:34.236861   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:21:34.237011   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:21:34.237145   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHUsername
	I1001 20:21:34.237314   65592 main.go:141] libmachine: Using SSH client type: native
	I1001 20:21:34.237509   65592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.110 22 <nil> <nil>}
	I1001 20:21:34.237520   65592 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 20:21:34.336351   65592 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1001 20:21:34.336399   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetMachineName
	I1001 20:21:34.336626   65592 buildroot.go:166] provisioning hostname "old-k8s-version-359369"
	I1001 20:21:34.336650   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetMachineName
	I1001 20:21:34.336808   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHHostname
	I1001 20:21:34.339435   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:34.339789   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:21:25 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:21:34.339821   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:34.339963   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHPort
	I1001 20:21:34.340162   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:21:34.340320   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:21:34.340476   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHUsername
	I1001 20:21:34.340635   65592 main.go:141] libmachine: Using SSH client type: native
	I1001 20:21:34.340856   65592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.110 22 <nil> <nil>}
	I1001 20:21:34.340872   65592 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-359369 && echo "old-k8s-version-359369" | sudo tee /etc/hostname
	I1001 20:21:34.456191   65592 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-359369
	
	I1001 20:21:34.456231   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHHostname
	I1001 20:21:34.459363   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:34.459763   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:21:25 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:21:34.459794   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:34.460009   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHPort
	I1001 20:21:34.460168   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:21:34.460329   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:21:34.460486   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHUsername
	I1001 20:21:34.460665   65592 main.go:141] libmachine: Using SSH client type: native
	I1001 20:21:34.460837   65592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.110 22 <nil> <nil>}
	I1001 20:21:34.460853   65592 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-359369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-359369/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-359369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 20:21:34.568463   65592 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 20:21:34.568502   65592 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 20:21:34.568552   65592 buildroot.go:174] setting up certificates
	I1001 20:21:34.568568   65592 provision.go:84] configureAuth start
	I1001 20:21:34.568582   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetMachineName
	I1001 20:21:34.568876   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetIP
	I1001 20:21:34.571541   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:34.571950   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:21:25 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:21:34.571979   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:34.572106   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHHostname
	I1001 20:21:34.574468   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:34.574868   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:21:25 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:21:34.574913   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:34.575035   65592 provision.go:143] copyHostCerts
	I1001 20:21:34.575094   65592 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 20:21:34.575104   65592 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 20:21:34.575169   65592 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 20:21:34.575279   65592 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 20:21:34.575289   65592 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 20:21:34.575313   65592 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 20:21:34.575375   65592 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 20:21:34.575382   65592 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 20:21:34.575400   65592 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 20:21:34.575453   65592 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-359369 san=[127.0.0.1 192.168.72.110 localhost minikube old-k8s-version-359369]
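Minikube generates that server certificate in-process; purely as a hedged sketch (not minikube's actual code path), roughly equivalent OpenSSL commands for a server cert signed by the ca.pem/ca-key.pem above with the same SANs would be:

	# Sketch only: sign a server cert against the minikube CA with the SANs listed above.
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.old-k8s-version-359369"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.72.110,DNS:localhost,DNS:minikube,DNS:old-k8s-version-359369")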
	I1001 20:21:35.113691   65592 provision.go:177] copyRemoteCerts
	I1001 20:21:35.113746   65592 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 20:21:35.113771   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHHostname
	I1001 20:21:35.116565   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:35.116861   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:21:25 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:21:35.116902   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:35.117058   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHPort
	I1001 20:21:35.117290   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:21:35.117456   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHUsername
	I1001 20:21:35.117585   65592 sshutil.go:53] new ssh client: &{IP:192.168.72.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/old-k8s-version-359369/id_rsa Username:docker}
	I1001 20:21:35.194699   65592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 20:21:35.218424   65592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1001 20:21:35.242277   65592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 20:21:35.266045   65592 provision.go:87] duration metric: took 697.462168ms to configureAuth
	I1001 20:21:35.266077   65592 buildroot.go:189] setting minikube options for container-runtime
	I1001 20:21:35.266296   65592 config.go:182] Loaded profile config "old-k8s-version-359369": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1001 20:21:35.266396   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHHostname
	I1001 20:21:35.268996   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:35.269335   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:21:25 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:21:35.269364   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:35.269527   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHPort
	I1001 20:21:35.269690   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:21:35.269828   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:21:35.269966   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHUsername
	I1001 20:21:35.270097   65592 main.go:141] libmachine: Using SSH client type: native
	I1001 20:21:35.270280   65592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.110 22 <nil> <nil>}
	I1001 20:21:35.270296   65592 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 20:21:35.481755   65592 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 20:21:35.481785   65592 machine.go:96] duration metric: took 1.248545118s to provisionDockerMachine
	I1001 20:21:35.481799   65592 start.go:293] postStartSetup for "old-k8s-version-359369" (driver="kvm2")
	I1001 20:21:35.481812   65592 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 20:21:35.481852   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .DriverName
	I1001 20:21:35.482190   65592 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 20:21:35.482214   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHHostname
	I1001 20:21:35.484650   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:35.485033   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:21:25 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:21:35.485063   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:35.485160   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHPort
	I1001 20:21:35.485348   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:21:35.485547   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHUsername
	I1001 20:21:35.485679   65592 sshutil.go:53] new ssh client: &{IP:192.168.72.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/old-k8s-version-359369/id_rsa Username:docker}
	I1001 20:21:35.563462   65592 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 20:21:35.567719   65592 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 20:21:35.567744   65592 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 20:21:35.567815   65592 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 20:21:35.567909   65592 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 20:21:35.568027   65592 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 20:21:35.577303   65592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 20:21:35.600070   65592 start.go:296] duration metric: took 118.255802ms for postStartSetup
	I1001 20:21:35.600120   65592 fix.go:56] duration metric: took 21.750429263s for fixHost
	I1001 20:21:35.600149   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHHostname
	I1001 20:21:35.602779   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:35.603100   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:21:25 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:21:35.603131   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:35.603325   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHPort
	I1001 20:21:35.603540   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:21:35.603737   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:21:35.603918   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHUsername
	I1001 20:21:35.604113   65592 main.go:141] libmachine: Using SSH client type: native
	I1001 20:21:35.604316   65592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.110 22 <nil> <nil>}
	I1001 20:21:35.604328   65592 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 20:21:35.705002   65592 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727814095.681124283
	
	I1001 20:21:35.705026   65592 fix.go:216] guest clock: 1727814095.681124283
	I1001 20:21:35.705049   65592 fix.go:229] Guest: 2024-10-01 20:21:35.681124283 +0000 UTC Remote: 2024-10-01 20:21:35.600127058 +0000 UTC m=+268.426532967 (delta=80.997225ms)
	I1001 20:21:35.705078   65592 fix.go:200] guest clock delta is within tolerance: 80.997225ms
	I1001 20:21:35.705085   65592 start.go:83] releasing machines lock for "old-k8s-version-359369", held for 21.855433494s
	I1001 20:21:35.705126   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .DriverName
	I1001 20:21:35.705374   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetIP
	I1001 20:21:35.708706   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:35.709125   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:21:25 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:21:35.709155   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:35.709439   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .DriverName
	I1001 20:21:35.710007   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .DriverName
	I1001 20:21:35.710188   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .DriverName
	I1001 20:21:35.710266   65592 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 20:21:35.710320   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHHostname
	I1001 20:21:35.710378   65592 ssh_runner.go:195] Run: cat /version.json
	I1001 20:21:35.710405   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHHostname
	I1001 20:21:35.713705   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:35.713877   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:35.714027   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:21:25 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:21:35.714053   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:35.714301   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHPort
	I1001 20:21:35.714401   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:21:25 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:21:35.714431   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:35.714469   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:21:35.714535   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHPort
	I1001 20:21:35.714619   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHUsername
	I1001 20:21:35.714707   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHKeyPath
	I1001 20:21:35.714754   65592 sshutil.go:53] new ssh client: &{IP:192.168.72.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/old-k8s-version-359369/id_rsa Username:docker}
	I1001 20:21:35.714978   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetSSHUsername
	I1001 20:21:35.715118   65592 sshutil.go:53] new ssh client: &{IP:192.168.72.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/old-k8s-version-359369/id_rsa Username:docker}
	I1001 20:21:35.789935   65592 ssh_runner.go:195] Run: systemctl --version
	I1001 20:21:35.826166   65592 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 20:21:35.974252   65592 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 20:21:35.981702   65592 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 20:21:35.981786   65592 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 20:21:36.001533   65592 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 20:21:36.001561   65592 start.go:495] detecting cgroup driver to use...
	I1001 20:21:36.001640   65592 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 20:21:36.022135   65592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 20:21:36.038518   65592 docker.go:217] disabling cri-docker service (if available) ...
	I1001 20:21:36.038604   65592 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 20:21:36.054304   65592 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 20:21:36.072803   65592 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 20:21:36.198500   65592 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 20:21:36.364044   65592 docker.go:233] disabling docker service ...
	I1001 20:21:36.364113   65592 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 20:21:36.378428   65592 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 20:21:36.393570   65592 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 20:21:36.537617   65592 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 20:21:36.663369   65592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 20:21:36.677709   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 20:21:36.696864   65592 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1001 20:21:36.696937   65592 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:21:36.708230   65592 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 20:21:36.708314   65592 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:21:36.718729   65592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:21:36.729473   65592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
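The four sed edits above set the pause image and cgroup handling for CRI-O. A quick way to confirm what they left in the config file (a sketch, not part of the test; expected values are inferred from the commands above):

	# Verify pause image, cgroup manager and conmon cgroup after the edits above.
	sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	# Expected (assumption based on the sed commands above):
	#   pause_image = "registry.k8s.io/pause:3.2"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"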
	I1001 20:21:36.741956   65592 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 20:21:36.753103   65592 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 20:21:36.762997   65592 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 20:21:36.763069   65592 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 20:21:36.776507   65592 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
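The three commands above implement a common fallback: if the bridge-netfilter sysctl is missing (as the status-255 error shows), load br_netfilter and then enable IPv4 forwarding. A minimal standalone sketch of that pattern:

	# Sketch of the netfilter fallback shown above.
	if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	  sudo modprobe br_netfilter
	fi
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"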
	I1001 20:21:36.788404   65592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:21:36.926460   65592 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 20:21:37.034059   65592 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 20:21:37.034132   65592 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 20:21:37.039051   65592 start.go:563] Will wait 60s for crictl version
	I1001 20:21:37.039110   65592 ssh_runner.go:195] Run: which crictl
	I1001 20:21:37.042945   65592 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 20:21:37.088196   65592 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 20:21:37.088408   65592 ssh_runner.go:195] Run: crio --version
	I1001 20:21:37.116940   65592 ssh_runner.go:195] Run: crio --version
	I1001 20:21:37.153040   65592 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1001 20:21:37.154087   65592 main.go:141] libmachine: (old-k8s-version-359369) Calling .GetIP
	I1001 20:21:37.157319   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:37.157910   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:7f:54", ip: ""} in network mk-old-k8s-version-359369: {Iface:virbr4 ExpiryTime:2024-10-01 21:21:25 +0000 UTC Type:0 Mac:52:54:00:b5:7f:54 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:old-k8s-version-359369 Clientid:01:52:54:00:b5:7f:54}
	I1001 20:21:37.157938   65592 main.go:141] libmachine: (old-k8s-version-359369) DBG | domain old-k8s-version-359369 has defined IP address 192.168.72.110 and MAC address 52:54:00:b5:7f:54 in network mk-old-k8s-version-359369
	I1001 20:21:37.158185   65592 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1001 20:21:37.163955   65592 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
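The one-liner above (repeated later for control-plane.minikube.internal) is an idempotent /etc/hosts update: drop any existing entry, append the current one, and copy the temp file back with sudo. Expanded for readability, with the values from the log (a sketch only):

	# Rewrite the host.minikube.internal entry idempotently.
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.168.72.1\thost.minikube.internal\n'
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts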
	I1001 20:21:37.177060   65592 kubeadm.go:883] updating cluster {Name:old-k8s-version-359369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-359369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.110 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 20:21:37.177171   65592 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1001 20:21:37.177231   65592 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:21:37.225301   65592 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1001 20:21:37.225366   65592 ssh_runner.go:195] Run: which lz4
	I1001 20:21:37.230413   65592 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 20:21:37.234994   65592 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 20:21:37.235032   65592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1001 20:21:38.799541   65592 crio.go:462] duration metric: took 1.569175029s to copy over tarball
	I1001 20:21:38.799623   65592 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 20:21:41.897928   65592 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.09827224s)
	I1001 20:21:41.897963   65592 crio.go:469] duration metric: took 3.098389074s to extract the tarball
	I1001 20:21:41.897975   65592 ssh_runner.go:146] rm: /preloaded.tar.lz4
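Because no preloaded images were found, the test copies the ~473 MB preload tarball over SSH and unpacks it into /var. A rough manual equivalent of those two steps (filenames taken from the log; the /tmp staging path and ssh key path are assumptions, a sketch only):

	# Manual equivalent of the preload copy/extract above.
	scp -i id_rsa preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 docker@192.168.72.110:/tmp/preloaded.tar.lz4
	ssh -i id_rsa docker@192.168.72.110 \
	  'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo rm /tmp/preloaded.tar.lz4'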
	I1001 20:21:41.941739   65592 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:21:41.975150   65592 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1001 20:21:41.975175   65592 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1001 20:21:41.975283   65592 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 20:21:41.975288   65592 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1001 20:21:41.975310   65592 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1001 20:21:41.975325   65592 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1001 20:21:41.975395   65592 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1001 20:21:41.975288   65592 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1001 20:21:41.975433   65592 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1001 20:21:41.975486   65592 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1001 20:21:41.977731   65592 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1001 20:21:41.977802   65592 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1001 20:21:41.977842   65592 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1001 20:21:41.977884   65592 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 20:21:41.977726   65592 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1001 20:21:41.978225   65592 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1001 20:21:41.978281   65592 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1001 20:21:41.978517   65592 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1001 20:21:42.246298   65592 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1001 20:21:42.261614   65592 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1001 20:21:42.295536   65592 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1001 20:21:42.295587   65592 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1001 20:21:42.295640   65592 ssh_runner.go:195] Run: which crictl
	I1001 20:21:42.321991   65592 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1001 20:21:42.328689   65592 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1001 20:21:42.328812   65592 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1001 20:21:42.328756   65592 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1001 20:21:42.328874   65592 ssh_runner.go:195] Run: which crictl
	I1001 20:21:42.337574   65592 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1001 20:21:42.340591   65592 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1001 20:21:42.344196   65592 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1001 20:21:42.357325   65592 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1001 20:21:42.433608   65592 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1001 20:21:42.433632   65592 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1001 20:21:42.433650   65592 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1001 20:21:42.433780   65592 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1001 20:21:42.433809   65592 ssh_runner.go:195] Run: which crictl
	I1001 20:21:42.520759   65592 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1001 20:21:42.520804   65592 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1001 20:21:42.520835   65592 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1001 20:21:42.520845   65592 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1001 20:21:42.520866   65592 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1001 20:21:42.520892   65592 ssh_runner.go:195] Run: which crictl
	I1001 20:21:42.520894   65592 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1001 20:21:42.520869   65592 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1001 20:21:42.520921   65592 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1001 20:21:42.520935   65592 ssh_runner.go:195] Run: which crictl
	I1001 20:21:42.520946   65592 ssh_runner.go:195] Run: which crictl
	I1001 20:21:42.520957   65592 ssh_runner.go:195] Run: which crictl
	I1001 20:21:42.554452   65592 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1001 20:21:42.554475   65592 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1001 20:21:42.554505   65592 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1001 20:21:42.554509   65592 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1001 20:21:42.554566   65592 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1001 20:21:42.554626   65592 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1001 20:21:42.554651   65592 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1001 20:21:42.694316   65592 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1001 20:21:42.715189   65592 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1001 20:21:42.715265   65592 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1001 20:21:42.715289   65592 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1001 20:21:42.715195   65592 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1001 20:21:42.715234   65592 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1001 20:21:42.715374   65592 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1001 20:21:42.806245   65592 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1001 20:21:42.824065   65592 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1001 20:21:42.886239   65592 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1001 20:21:42.886287   65592 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1001 20:21:42.886287   65592 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1001 20:21:42.886314   65592 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1001 20:21:42.886358   65592 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1001 20:21:42.976472   65592 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1001 20:21:42.976545   65592 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1001 20:21:42.976514   65592 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1001 20:21:42.976599   65592 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1001 20:21:43.276573   65592 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 20:21:43.425303   65592 cache_images.go:92] duration metric: took 1.450111121s to LoadCachedImages
	W1001 20:21:43.425407   65592 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19736-11198/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I1001 20:21:43.425434   65592 kubeadm.go:934] updating node { 192.168.72.110 8443 v1.20.0 crio true true} ...
	I1001 20:21:43.425562   65592 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-359369 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-359369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 20:21:43.425647   65592 ssh_runner.go:195] Run: crio config
	I1001 20:21:43.478526   65592 cni.go:84] Creating CNI manager for ""
	I1001 20:21:43.478552   65592 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:21:43.478564   65592 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 20:21:43.478581   65592 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.110 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-359369 NodeName:old-k8s-version-359369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1001 20:21:43.478714   65592 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-359369"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.110
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.110"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 20:21:43.478783   65592 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1001 20:21:43.489520   65592 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 20:21:43.489600   65592 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 20:21:43.499252   65592 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1001 20:21:43.517052   65592 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 20:21:43.534431   65592 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1001 20:21:43.551650   65592 ssh_runner.go:195] Run: grep 192.168.72.110	control-plane.minikube.internal$ /etc/hosts
	I1001 20:21:43.555259   65592 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.110	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 20:21:43.567767   65592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:21:43.698140   65592 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:21:43.715695   65592 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369 for IP: 192.168.72.110
	I1001 20:21:43.715719   65592 certs.go:194] generating shared ca certs ...
	I1001 20:21:43.715740   65592 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:21:43.715917   65592 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 20:21:43.715968   65592 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 20:21:43.715984   65592 certs.go:256] generating profile certs ...
	I1001 20:21:43.716116   65592 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/client.key
	I1001 20:21:43.716189   65592 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/apiserver.key.3f76c948
	I1001 20:21:43.716242   65592 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/proxy-client.key
	I1001 20:21:43.716431   65592 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem (1338 bytes)
	W1001 20:21:43.716483   65592 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430_empty.pem, impossibly tiny 0 bytes
	I1001 20:21:43.716494   65592 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 20:21:43.716529   65592 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 20:21:43.716562   65592 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 20:21:43.716593   65592 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 20:21:43.716645   65592 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem (1708 bytes)
	I1001 20:21:43.717418   65592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 20:21:43.783954   65592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 20:21:43.818455   65592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 20:21:43.855232   65592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 20:21:43.886039   65592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1001 20:21:43.925088   65592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 20:21:43.964945   65592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 20:21:43.990956   65592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1001 20:21:44.015820   65592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 20:21:44.044480   65592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem --> /usr/share/ca-certificates/18430.pem (1338 bytes)
	I1001 20:21:44.071980   65592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /usr/share/ca-certificates/184302.pem (1708 bytes)
	I1001 20:21:44.097569   65592 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 20:21:44.115900   65592 ssh_runner.go:195] Run: openssl version
	I1001 20:21:44.123503   65592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 20:21:44.137923   65592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:21:44.143022   65592 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:21:44.143095   65592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:21:44.149032   65592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 20:21:44.159838   65592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18430.pem && ln -fs /usr/share/ca-certificates/18430.pem /etc/ssl/certs/18430.pem"
	I1001 20:21:44.171571   65592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18430.pem
	I1001 20:21:44.176590   65592 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 20:21:44.176659   65592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18430.pem
	I1001 20:21:44.182536   65592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18430.pem /etc/ssl/certs/51391683.0"
	I1001 20:21:44.193486   65592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/184302.pem && ln -fs /usr/share/ca-certificates/184302.pem /etc/ssl/certs/184302.pem"
	I1001 20:21:44.203978   65592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/184302.pem
	I1001 20:21:44.208885   65592 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 20:21:44.208951   65592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/184302.pem
	I1001 20:21:44.214851   65592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/184302.pem /etc/ssl/certs/3ec20f2e.0"
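The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) come straight from `openssl x509 -hash`, which is how OpenSSL locates CA certificates in /etc/ssl/certs. A small sketch of how one of them is derived, using the minikubeCA.pem path from the log:

	# Derive the hash-named symlink that OpenSSL uses for CA lookup.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	# For this CA the hash is b5213941, matching the /etc/ssl/certs/b5213941.0 link above.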
	I1001 20:21:44.229199   65592 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 20:21:44.234889   65592 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1001 20:21:44.242236   65592 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1001 20:21:44.248014   65592 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1001 20:21:44.255027   65592 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1001 20:21:44.260969   65592 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1001 20:21:44.266853   65592 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1001 20:21:44.272890   65592 kubeadm.go:392] StartCluster: {Name:old-k8s-version-359369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-359369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.110 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:21:44.272981   65592 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 20:21:44.273040   65592 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 20:21:44.324345   65592 cri.go:89] found id: ""
	I1001 20:21:44.324458   65592 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 20:21:44.335091   65592 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1001 20:21:44.335112   65592 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1001 20:21:44.335175   65592 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1001 20:21:44.345379   65592 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1001 20:21:44.346626   65592 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-359369" does not appear in /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:21:44.347713   65592 kubeconfig.go:62] /home/jenkins/minikube-integration/19736-11198/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-359369" cluster setting kubeconfig missing "old-k8s-version-359369" context setting]
	I1001 20:21:44.349059   65592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/kubeconfig: {Name:mk7ef19d546928001abf2478a3abe1d17765a591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:21:44.449227   65592 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1001 20:21:44.460918   65592 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.110
	I1001 20:21:44.460958   65592 kubeadm.go:1160] stopping kube-system containers ...
	I1001 20:21:44.460975   65592 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1001 20:21:44.461028   65592 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 20:21:44.507424   65592 cri.go:89] found id: ""
	I1001 20:21:44.507503   65592 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1001 20:21:44.526197   65592 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:21:44.537656   65592 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:21:44.537679   65592 kubeadm.go:157] found existing configuration files:
	
	I1001 20:21:44.537729   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 20:21:44.550066   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:21:44.550140   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:21:44.563037   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 20:21:44.575679   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:21:44.575747   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:21:44.586581   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 20:21:44.597091   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:21:44.597162   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:21:44.607342   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 20:21:44.617141   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:21:44.617228   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 20:21:44.628050   65592 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 20:21:44.638247   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:21:44.771081   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:21:45.622037   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:21:45.848254   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:21:45.963379   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:21:46.056228   65592 api_server.go:52] waiting for apiserver process to appear ...
	I1001 20:21:46.056342   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:21:46.557138   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:21:47.056921   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:21:47.557377   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:21:48.056511   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:21:48.557321   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:21:49.057216   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:21:49.556464   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:21:50.057122   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:21:50.556553   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:21:51.057151   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:21:51.557096   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:21:52.057118   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:21:52.557208   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:21:53.056478   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:21:53.556508   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:21:54.057271   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:21:54.556635   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:21:55.056520   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:21:55.557222   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:21:56.057390   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:21:56.556874   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:21:57.057268   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:21:57.556514   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:21:58.056506   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:21:58.556959   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:21:59.057230   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:21:59.556925   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:00.057124   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:00.557035   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:01.057202   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:01.556510   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:02.057322   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:02.557118   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:03.057307   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:03.557170   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:04.057229   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:04.556955   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:05.057170   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:05.556552   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:06.056491   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:06.556500   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:07.057122   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:07.556763   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:08.057236   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:08.557345   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:09.057000   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:09.556504   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:10.057038   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:10.556506   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:11.057197   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:11.556525   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:12.056879   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:12.556794   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:13.057220   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:13.556552   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:14.056532   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:14.557096   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:15.056483   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:15.556513   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:16.057049   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:16.556803   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:17.056798   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:17.557030   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:18.057085   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:18.557160   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:19.056984   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:19.557057   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:20.056556   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:20.557048   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:21.056532   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:21.556521   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:22.057210   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:22.556453   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:23.057274   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:23.556617   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:24.057060   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:24.556900   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:25.057297   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:25.556517   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:26.057070   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:26.556480   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:27.057013   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:27.556804   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:28.056629   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:28.556887   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:29.056927   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:29.556600   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:30.057218   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:30.557348   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:31.056549   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:31.556720   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:32.057212   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:32.556818   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:33.057023   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:33.557114   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:34.056498   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:34.556709   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:35.057095   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:35.557133   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:36.056851   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:36.556477   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:37.056674   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:37.557447   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:38.056506   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:38.556565   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:39.057084   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:39.557318   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:40.056390   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:40.556860   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:41.057229   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:41.556463   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:42.056499   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:42.557067   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:43.056596   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:43.557199   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:44.057211   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:44.556720   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:45.056534   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:45.557252   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:46.056478   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:22:46.056592   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:22:46.096488   65592 cri.go:89] found id: ""
	I1001 20:22:46.096523   65592 logs.go:276] 0 containers: []
	W1001 20:22:46.096533   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:22:46.096539   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:22:46.096609   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:22:46.129869   65592 cri.go:89] found id: ""
	I1001 20:22:46.129902   65592 logs.go:276] 0 containers: []
	W1001 20:22:46.129913   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:22:46.129920   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:22:46.129981   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:22:46.164052   65592 cri.go:89] found id: ""
	I1001 20:22:46.164084   65592 logs.go:276] 0 containers: []
	W1001 20:22:46.164093   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:22:46.164098   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:22:46.164148   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:22:46.196683   65592 cri.go:89] found id: ""
	I1001 20:22:46.196722   65592 logs.go:276] 0 containers: []
	W1001 20:22:46.196735   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:22:46.196744   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:22:46.196798   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:22:46.230424   65592 cri.go:89] found id: ""
	I1001 20:22:46.230456   65592 logs.go:276] 0 containers: []
	W1001 20:22:46.230468   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:22:46.230487   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:22:46.230553   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:22:46.264050   65592 cri.go:89] found id: ""
	I1001 20:22:46.264076   65592 logs.go:276] 0 containers: []
	W1001 20:22:46.264083   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:22:46.264097   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:22:46.264161   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:22:46.308337   65592 cri.go:89] found id: ""
	I1001 20:22:46.308394   65592 logs.go:276] 0 containers: []
	W1001 20:22:46.308407   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:22:46.308425   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:22:46.308495   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:22:46.345497   65592 cri.go:89] found id: ""
	I1001 20:22:46.345526   65592 logs.go:276] 0 containers: []
	W1001 20:22:46.345536   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:22:46.345547   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:22:46.345560   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:22:46.399481   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:22:46.399518   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:22:46.414525   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:22:46.414559   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:22:46.538955   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:22:46.538982   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:22:46.538998   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:22:46.608094   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:22:46.608127   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:22:49.149399   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:49.165812   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:22:49.165882   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:22:49.219153   65592 cri.go:89] found id: ""
	I1001 20:22:49.219184   65592 logs.go:276] 0 containers: []
	W1001 20:22:49.219197   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:22:49.219204   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:22:49.219263   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:22:49.252807   65592 cri.go:89] found id: ""
	I1001 20:22:49.252831   65592 logs.go:276] 0 containers: []
	W1001 20:22:49.252839   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:22:49.252844   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:22:49.252887   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:22:49.286528   65592 cri.go:89] found id: ""
	I1001 20:22:49.286554   65592 logs.go:276] 0 containers: []
	W1001 20:22:49.286562   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:22:49.286568   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:22:49.286613   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:22:49.319172   65592 cri.go:89] found id: ""
	I1001 20:22:49.319202   65592 logs.go:276] 0 containers: []
	W1001 20:22:49.319217   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:22:49.319223   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:22:49.319273   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:22:49.351810   65592 cri.go:89] found id: ""
	I1001 20:22:49.351845   65592 logs.go:276] 0 containers: []
	W1001 20:22:49.351854   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:22:49.351860   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:22:49.351915   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:22:49.384655   65592 cri.go:89] found id: ""
	I1001 20:22:49.384688   65592 logs.go:276] 0 containers: []
	W1001 20:22:49.384699   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:22:49.384708   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:22:49.384770   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:22:49.421573   65592 cri.go:89] found id: ""
	I1001 20:22:49.421601   65592 logs.go:276] 0 containers: []
	W1001 20:22:49.421609   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:22:49.421615   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:22:49.421668   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:22:49.458187   65592 cri.go:89] found id: ""
	I1001 20:22:49.458218   65592 logs.go:276] 0 containers: []
	W1001 20:22:49.458229   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:22:49.458240   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:22:49.458253   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:22:49.531501   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:22:49.531524   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:22:49.531536   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:22:49.610294   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:22:49.610332   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:22:49.650811   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:22:49.650847   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:22:49.700752   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:22:49.700790   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:22:52.214823   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:52.228316   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:22:52.228405   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:22:52.267807   65592 cri.go:89] found id: ""
	I1001 20:22:52.267834   65592 logs.go:276] 0 containers: []
	W1001 20:22:52.267842   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:22:52.267847   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:22:52.267909   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:22:52.300459   65592 cri.go:89] found id: ""
	I1001 20:22:52.300494   65592 logs.go:276] 0 containers: []
	W1001 20:22:52.300505   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:22:52.300513   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:22:52.300569   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:22:52.332940   65592 cri.go:89] found id: ""
	I1001 20:22:52.332970   65592 logs.go:276] 0 containers: []
	W1001 20:22:52.332979   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:22:52.332984   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:22:52.333041   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:22:52.365301   65592 cri.go:89] found id: ""
	I1001 20:22:52.365333   65592 logs.go:276] 0 containers: []
	W1001 20:22:52.365343   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:22:52.365352   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:22:52.365425   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:22:52.398507   65592 cri.go:89] found id: ""
	I1001 20:22:52.398535   65592 logs.go:276] 0 containers: []
	W1001 20:22:52.398544   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:22:52.398550   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:22:52.398620   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:22:52.432072   65592 cri.go:89] found id: ""
	I1001 20:22:52.432102   65592 logs.go:276] 0 containers: []
	W1001 20:22:52.432111   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:22:52.432117   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:22:52.432177   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:22:52.464408   65592 cri.go:89] found id: ""
	I1001 20:22:52.464438   65592 logs.go:276] 0 containers: []
	W1001 20:22:52.464448   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:22:52.464453   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:22:52.464519   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:22:52.496639   65592 cri.go:89] found id: ""
	I1001 20:22:52.496671   65592 logs.go:276] 0 containers: []
	W1001 20:22:52.496683   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:22:52.496693   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:22:52.496706   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:22:52.534175   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:22:52.534203   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:22:52.585949   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:22:52.585982   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:22:52.598931   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:22:52.598961   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:22:52.673207   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:22:52.673233   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:22:52.673247   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:22:55.249882   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:55.265149   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:22:55.265227   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:22:55.300376   65592 cri.go:89] found id: ""
	I1001 20:22:55.300407   65592 logs.go:276] 0 containers: []
	W1001 20:22:55.300418   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:22:55.300424   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:22:55.300473   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:22:55.334878   65592 cri.go:89] found id: ""
	I1001 20:22:55.334914   65592 logs.go:276] 0 containers: []
	W1001 20:22:55.334927   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:22:55.334939   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:22:55.335002   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:22:55.366531   65592 cri.go:89] found id: ""
	I1001 20:22:55.366560   65592 logs.go:276] 0 containers: []
	W1001 20:22:55.366567   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:22:55.366573   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:22:55.366629   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:22:55.402431   65592 cri.go:89] found id: ""
	I1001 20:22:55.402468   65592 logs.go:276] 0 containers: []
	W1001 20:22:55.402477   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:22:55.402483   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:22:55.402539   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:22:55.435823   65592 cri.go:89] found id: ""
	I1001 20:22:55.435855   65592 logs.go:276] 0 containers: []
	W1001 20:22:55.435865   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:22:55.435871   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:22:55.435919   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:22:55.469207   65592 cri.go:89] found id: ""
	I1001 20:22:55.469237   65592 logs.go:276] 0 containers: []
	W1001 20:22:55.469246   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:22:55.469252   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:22:55.469314   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:22:55.500998   65592 cri.go:89] found id: ""
	I1001 20:22:55.501024   65592 logs.go:276] 0 containers: []
	W1001 20:22:55.501034   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:22:55.501039   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:22:55.501088   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:22:55.535011   65592 cri.go:89] found id: ""
	I1001 20:22:55.535046   65592 logs.go:276] 0 containers: []
	W1001 20:22:55.535059   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:22:55.535069   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:22:55.535082   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:22:55.547483   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:22:55.547515   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:22:55.615003   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:22:55.615095   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:22:55.615120   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:22:55.694885   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:22:55.694930   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:22:55.729379   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:22:55.729411   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:22:58.282772   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:22:58.296465   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:22:58.296538   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:22:58.329063   65592 cri.go:89] found id: ""
	I1001 20:22:58.329094   65592 logs.go:276] 0 containers: []
	W1001 20:22:58.329103   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:22:58.329109   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:22:58.329162   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:22:58.364207   65592 cri.go:89] found id: ""
	I1001 20:22:58.364237   65592 logs.go:276] 0 containers: []
	W1001 20:22:58.364248   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:22:58.364256   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:22:58.364309   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:22:58.396985   65592 cri.go:89] found id: ""
	I1001 20:22:58.397016   65592 logs.go:276] 0 containers: []
	W1001 20:22:58.397026   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:22:58.397034   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:22:58.397095   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:22:58.429765   65592 cri.go:89] found id: ""
	I1001 20:22:58.429791   65592 logs.go:276] 0 containers: []
	W1001 20:22:58.429802   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:22:58.429807   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:22:58.429863   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:22:58.467812   65592 cri.go:89] found id: ""
	I1001 20:22:58.467841   65592 logs.go:276] 0 containers: []
	W1001 20:22:58.467853   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:22:58.467858   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:22:58.467907   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:22:58.504546   65592 cri.go:89] found id: ""
	I1001 20:22:58.504574   65592 logs.go:276] 0 containers: []
	W1001 20:22:58.504585   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:22:58.504594   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:22:58.504650   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:22:58.537531   65592 cri.go:89] found id: ""
	I1001 20:22:58.537564   65592 logs.go:276] 0 containers: []
	W1001 20:22:58.537576   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:22:58.537582   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:22:58.537640   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:22:58.575781   65592 cri.go:89] found id: ""
	I1001 20:22:58.575806   65592 logs.go:276] 0 containers: []
	W1001 20:22:58.575813   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:22:58.575822   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:22:58.575832   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:22:58.613015   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:22:58.613053   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:22:58.664382   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:22:58.664424   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:22:58.677450   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:22:58.677484   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:22:58.757112   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:22:58.757137   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:22:58.757151   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:23:01.333426   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:23:01.348282   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:23:01.348350   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:23:01.383067   65592 cri.go:89] found id: ""
	I1001 20:23:01.383096   65592 logs.go:276] 0 containers: []
	W1001 20:23:01.383104   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:23:01.383109   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:23:01.383168   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:23:01.419142   65592 cri.go:89] found id: ""
	I1001 20:23:01.419173   65592 logs.go:276] 0 containers: []
	W1001 20:23:01.419181   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:23:01.419186   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:23:01.419234   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:23:01.451420   65592 cri.go:89] found id: ""
	I1001 20:23:01.451450   65592 logs.go:276] 0 containers: []
	W1001 20:23:01.451461   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:23:01.451468   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:23:01.451534   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:23:01.486443   65592 cri.go:89] found id: ""
	I1001 20:23:01.486482   65592 logs.go:276] 0 containers: []
	W1001 20:23:01.486491   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:23:01.486498   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:23:01.486561   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:23:01.521353   65592 cri.go:89] found id: ""
	I1001 20:23:01.521383   65592 logs.go:276] 0 containers: []
	W1001 20:23:01.521394   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:23:01.521405   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:23:01.521478   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:23:01.553717   65592 cri.go:89] found id: ""
	I1001 20:23:01.553748   65592 logs.go:276] 0 containers: []
	W1001 20:23:01.553758   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:23:01.553765   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:23:01.553817   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:23:01.592639   65592 cri.go:89] found id: ""
	I1001 20:23:01.592669   65592 logs.go:276] 0 containers: []
	W1001 20:23:01.592677   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:23:01.592683   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:23:01.592732   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:23:01.626777   65592 cri.go:89] found id: ""
	I1001 20:23:01.626808   65592 logs.go:276] 0 containers: []
	W1001 20:23:01.626820   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:23:01.626832   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:23:01.626856   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:23:01.639290   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:23:01.639316   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:23:01.717094   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:23:01.717116   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:23:01.717127   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:23:01.798435   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:23:01.798472   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:23:01.836563   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:23:01.836593   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:23:04.387852   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:23:04.400595   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:23:04.400669   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:23:04.432774   65592 cri.go:89] found id: ""
	I1001 20:23:04.432808   65592 logs.go:276] 0 containers: []
	W1001 20:23:04.432818   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:23:04.432826   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:23:04.432885   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:23:04.465476   65592 cri.go:89] found id: ""
	I1001 20:23:04.465502   65592 logs.go:276] 0 containers: []
	W1001 20:23:04.465510   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:23:04.465522   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:23:04.465582   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:23:04.501458   65592 cri.go:89] found id: ""
	I1001 20:23:04.501484   65592 logs.go:276] 0 containers: []
	W1001 20:23:04.501492   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:23:04.501497   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:23:04.501546   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:23:04.535841   65592 cri.go:89] found id: ""
	I1001 20:23:04.535872   65592 logs.go:276] 0 containers: []
	W1001 20:23:04.535883   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:23:04.535890   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:23:04.535953   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:23:04.569431   65592 cri.go:89] found id: ""
	I1001 20:23:04.569462   65592 logs.go:276] 0 containers: []
	W1001 20:23:04.569475   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:23:04.569484   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:23:04.569555   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:23:04.602644   65592 cri.go:89] found id: ""
	I1001 20:23:04.602676   65592 logs.go:276] 0 containers: []
	W1001 20:23:04.602685   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:23:04.602692   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:23:04.602752   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:23:04.635798   65592 cri.go:89] found id: ""
	I1001 20:23:04.635822   65592 logs.go:276] 0 containers: []
	W1001 20:23:04.635830   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:23:04.635835   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:23:04.635889   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:23:04.669885   65592 cri.go:89] found id: ""
	I1001 20:23:04.669920   65592 logs.go:276] 0 containers: []
	W1001 20:23:04.669932   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:23:04.669944   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:23:04.669956   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:23:04.723936   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:23:04.723972   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:23:04.737368   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:23:04.737405   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:23:04.807880   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:23:04.807907   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:23:04.807923   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:23:04.882848   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:23:04.882900   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:23:07.425109   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:23:07.439241   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:23:07.439303   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:23:07.475662   65592 cri.go:89] found id: ""
	I1001 20:23:07.475700   65592 logs.go:276] 0 containers: []
	W1001 20:23:07.475710   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:23:07.475717   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:23:07.475777   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:23:07.510347   65592 cri.go:89] found id: ""
	I1001 20:23:07.510378   65592 logs.go:276] 0 containers: []
	W1001 20:23:07.510388   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:23:07.510396   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:23:07.510466   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:23:07.543873   65592 cri.go:89] found id: ""
	I1001 20:23:07.543900   65592 logs.go:276] 0 containers: []
	W1001 20:23:07.543907   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:23:07.543916   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:23:07.543973   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:23:07.576945   65592 cri.go:89] found id: ""
	I1001 20:23:07.576984   65592 logs.go:276] 0 containers: []
	W1001 20:23:07.576997   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:23:07.577007   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:23:07.577067   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:23:07.609986   65592 cri.go:89] found id: ""
	I1001 20:23:07.610016   65592 logs.go:276] 0 containers: []
	W1001 20:23:07.610028   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:23:07.610035   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:23:07.610104   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:23:07.646475   65592 cri.go:89] found id: ""
	I1001 20:23:07.646508   65592 logs.go:276] 0 containers: []
	W1001 20:23:07.646517   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:23:07.646522   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:23:07.646585   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:23:07.681543   65592 cri.go:89] found id: ""
	I1001 20:23:07.681572   65592 logs.go:276] 0 containers: []
	W1001 20:23:07.681584   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:23:07.681591   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:23:07.681650   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:23:07.719532   65592 cri.go:89] found id: ""
	I1001 20:23:07.719562   65592 logs.go:276] 0 containers: []
	W1001 20:23:07.719570   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:23:07.719581   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:23:07.719591   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:23:07.775093   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:23:07.775129   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:23:07.788831   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:23:07.788860   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:23:07.856271   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:23:07.856295   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:23:07.856309   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:23:07.947551   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:23:07.947598   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:23:10.488496   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:23:10.502254   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:23:10.502320   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:23:10.535662   65592 cri.go:89] found id: ""
	I1001 20:23:10.535689   65592 logs.go:276] 0 containers: []
	W1001 20:23:10.535698   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:23:10.535705   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:23:10.535754   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:23:10.568697   65592 cri.go:89] found id: ""
	I1001 20:23:10.568725   65592 logs.go:276] 0 containers: []
	W1001 20:23:10.568734   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:23:10.568740   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:23:10.568796   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:23:10.605709   65592 cri.go:89] found id: ""
	I1001 20:23:10.605741   65592 logs.go:276] 0 containers: []
	W1001 20:23:10.605752   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:23:10.605759   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:23:10.605821   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:23:10.639609   65592 cri.go:89] found id: ""
	I1001 20:23:10.639635   65592 logs.go:276] 0 containers: []
	W1001 20:23:10.639644   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:23:10.639652   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:23:10.639719   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:23:10.674513   65592 cri.go:89] found id: ""
	I1001 20:23:10.674552   65592 logs.go:276] 0 containers: []
	W1001 20:23:10.674564   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:23:10.674572   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:23:10.675028   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:23:10.712344   65592 cri.go:89] found id: ""
	I1001 20:23:10.712395   65592 logs.go:276] 0 containers: []
	W1001 20:23:10.712406   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:23:10.712423   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:23:10.712487   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:23:10.746210   65592 cri.go:89] found id: ""
	I1001 20:23:10.746238   65592 logs.go:276] 0 containers: []
	W1001 20:23:10.746246   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:23:10.746251   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:23:10.746298   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:23:10.780930   65592 cri.go:89] found id: ""
	I1001 20:23:10.780956   65592 logs.go:276] 0 containers: []
	W1001 20:23:10.780968   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:23:10.780976   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:23:10.780988   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:23:10.837489   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:23:10.837531   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:23:10.851064   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:23:10.851098   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:23:10.922272   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:23:10.922310   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:23:10.922327   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:23:10.996764   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:23:10.996801   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:23:13.536989   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:23:13.549896   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:23:13.549992   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:23:13.583251   65592 cri.go:89] found id: ""
	I1001 20:23:13.583280   65592 logs.go:276] 0 containers: []
	W1001 20:23:13.583288   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:23:13.583293   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:23:13.583341   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:23:13.616724   65592 cri.go:89] found id: ""
	I1001 20:23:13.616772   65592 logs.go:276] 0 containers: []
	W1001 20:23:13.616786   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:23:13.616797   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:23:13.616922   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:23:13.652153   65592 cri.go:89] found id: ""
	I1001 20:23:13.652179   65592 logs.go:276] 0 containers: []
	W1001 20:23:13.652186   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:23:13.652191   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:23:13.652236   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:23:13.686670   65592 cri.go:89] found id: ""
	I1001 20:23:13.686696   65592 logs.go:276] 0 containers: []
	W1001 20:23:13.686706   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:23:13.686713   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:23:13.686800   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:23:13.722209   65592 cri.go:89] found id: ""
	I1001 20:23:13.722242   65592 logs.go:276] 0 containers: []
	W1001 20:23:13.722251   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:23:13.722258   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:23:13.722314   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:23:13.755784   65592 cri.go:89] found id: ""
	I1001 20:23:13.755814   65592 logs.go:276] 0 containers: []
	W1001 20:23:13.755826   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:23:13.755833   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:23:13.755885   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:23:13.796854   65592 cri.go:89] found id: ""
	I1001 20:23:13.796879   65592 logs.go:276] 0 containers: []
	W1001 20:23:13.796886   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:23:13.796892   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:23:13.796946   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:23:13.831635   65592 cri.go:89] found id: ""
	I1001 20:23:13.831666   65592 logs.go:276] 0 containers: []
	W1001 20:23:13.831676   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:23:13.831687   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:23:13.831702   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:23:13.883074   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:23:13.883118   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:23:13.897346   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:23:13.897374   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:23:13.968408   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:23:13.968438   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:23:13.968452   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:23:14.047040   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:23:14.047076   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:23:16.585621   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:23:16.600003   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:23:16.600067   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:23:16.640031   65592 cri.go:89] found id: ""
	I1001 20:23:16.640063   65592 logs.go:276] 0 containers: []
	W1001 20:23:16.640072   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:23:16.640078   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:23:16.640135   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:23:16.679044   65592 cri.go:89] found id: ""
	I1001 20:23:16.679076   65592 logs.go:276] 0 containers: []
	W1001 20:23:16.679086   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:23:16.679093   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:23:16.679151   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:23:16.726549   65592 cri.go:89] found id: ""
	I1001 20:23:16.726583   65592 logs.go:276] 0 containers: []
	W1001 20:23:16.726593   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:23:16.726600   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:23:16.726671   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:23:16.769362   65592 cri.go:89] found id: ""
	I1001 20:23:16.769392   65592 logs.go:276] 0 containers: []
	W1001 20:23:16.769402   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:23:16.769410   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:23:16.769475   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:23:16.804483   65592 cri.go:89] found id: ""
	I1001 20:23:16.804515   65592 logs.go:276] 0 containers: []
	W1001 20:23:16.804526   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:23:16.804533   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:23:16.804611   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:23:16.842842   65592 cri.go:89] found id: ""
	I1001 20:23:16.842868   65592 logs.go:276] 0 containers: []
	W1001 20:23:16.842876   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:23:16.842882   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:23:16.842934   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:23:16.878614   65592 cri.go:89] found id: ""
	I1001 20:23:16.878641   65592 logs.go:276] 0 containers: []
	W1001 20:23:16.878648   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:23:16.878654   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:23:16.878700   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:23:16.910961   65592 cri.go:89] found id: ""
	I1001 20:23:16.910991   65592 logs.go:276] 0 containers: []
	W1001 20:23:16.911002   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:23:16.911011   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:23:16.911022   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:23:16.952035   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:23:16.952063   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:23:17.004635   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:23:17.004673   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:23:17.018612   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:23:17.018639   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:23:17.086379   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:23:17.086409   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:23:17.086422   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:23:19.668433   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:23:19.682684   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:23:19.682756   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:23:19.720135   65592 cri.go:89] found id: ""
	I1001 20:23:19.720159   65592 logs.go:276] 0 containers: []
	W1001 20:23:19.720167   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:23:19.720173   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:23:19.720235   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:23:19.753067   65592 cri.go:89] found id: ""
	I1001 20:23:19.753109   65592 logs.go:276] 0 containers: []
	W1001 20:23:19.753125   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:23:19.753135   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:23:19.753217   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:23:19.787155   65592 cri.go:89] found id: ""
	I1001 20:23:19.787181   65592 logs.go:276] 0 containers: []
	W1001 20:23:19.787193   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:23:19.787201   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:23:19.787264   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:23:19.822788   65592 cri.go:89] found id: ""
	I1001 20:23:19.822817   65592 logs.go:276] 0 containers: []
	W1001 20:23:19.822828   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:23:19.822836   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:23:19.822899   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:23:19.856625   65592 cri.go:89] found id: ""
	I1001 20:23:19.856656   65592 logs.go:276] 0 containers: []
	W1001 20:23:19.856667   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:23:19.856674   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:23:19.856742   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:23:19.890927   65592 cri.go:89] found id: ""
	I1001 20:23:19.890968   65592 logs.go:276] 0 containers: []
	W1001 20:23:19.890981   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:23:19.890988   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:23:19.891048   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:23:19.924629   65592 cri.go:89] found id: ""
	I1001 20:23:19.924668   65592 logs.go:276] 0 containers: []
	W1001 20:23:19.924678   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:23:19.924685   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:23:19.924744   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:23:19.969369   65592 cri.go:89] found id: ""
	I1001 20:23:19.969399   65592 logs.go:276] 0 containers: []
	W1001 20:23:19.969409   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:23:19.969420   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:23:19.969434   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:23:20.050970   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:23:20.051008   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:23:20.092637   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:23:20.092676   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:23:20.145997   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:23:20.146039   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:23:20.159904   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:23:20.159933   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:23:20.237109   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:23:22.738234   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:23:22.750842   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:23:22.750922   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:23:22.786777   65592 cri.go:89] found id: ""
	I1001 20:23:22.786815   65592 logs.go:276] 0 containers: []
	W1001 20:23:22.786826   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:23:22.786834   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:23:22.786899   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:23:22.822147   65592 cri.go:89] found id: ""
	I1001 20:23:22.822180   65592 logs.go:276] 0 containers: []
	W1001 20:23:22.822192   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:23:22.822199   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:23:22.822260   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:23:22.856272   65592 cri.go:89] found id: ""
	I1001 20:23:22.856308   65592 logs.go:276] 0 containers: []
	W1001 20:23:22.856320   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:23:22.856327   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:23:22.856409   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:23:22.890863   65592 cri.go:89] found id: ""
	I1001 20:23:22.890891   65592 logs.go:276] 0 containers: []
	W1001 20:23:22.890900   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:23:22.890906   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:23:22.890970   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:23:22.929208   65592 cri.go:89] found id: ""
	I1001 20:23:22.929240   65592 logs.go:276] 0 containers: []
	W1001 20:23:22.929249   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:23:22.929255   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:23:22.929312   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:23:22.964121   65592 cri.go:89] found id: ""
	I1001 20:23:22.964150   65592 logs.go:276] 0 containers: []
	W1001 20:23:22.964160   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:23:22.964169   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:23:22.964229   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:23:23.019323   65592 cri.go:89] found id: ""
	I1001 20:23:23.019355   65592 logs.go:276] 0 containers: []
	W1001 20:23:23.019366   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:23:23.019374   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:23:23.019441   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:23:23.053129   65592 cri.go:89] found id: ""
	I1001 20:23:23.053162   65592 logs.go:276] 0 containers: []
	W1001 20:23:23.053172   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:23:23.053183   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:23:23.053202   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:23:23.105245   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:23:23.105286   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:23:23.121080   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:23:23.121115   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:23:23.192709   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:23:23.192736   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:23:23.192748   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:23:23.273415   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:23:23.273455   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:23:25.813517   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:23:25.828393   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:23:25.828467   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:23:25.861501   65592 cri.go:89] found id: ""
	I1001 20:23:25.861534   65592 logs.go:276] 0 containers: []
	W1001 20:23:25.861548   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:23:25.861556   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:23:25.861614   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:23:25.894478   65592 cri.go:89] found id: ""
	I1001 20:23:25.894510   65592 logs.go:276] 0 containers: []
	W1001 20:23:25.894520   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:23:25.894527   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:23:25.894576   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:23:25.927318   65592 cri.go:89] found id: ""
	I1001 20:23:25.927346   65592 logs.go:276] 0 containers: []
	W1001 20:23:25.927357   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:23:25.927365   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:23:25.927429   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:23:25.962497   65592 cri.go:89] found id: ""
	I1001 20:23:25.962528   65592 logs.go:276] 0 containers: []
	W1001 20:23:25.962539   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:23:25.962546   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:23:25.962607   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:23:25.997266   65592 cri.go:89] found id: ""
	I1001 20:23:25.997299   65592 logs.go:276] 0 containers: []
	W1001 20:23:25.997310   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:23:25.997318   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:23:25.997379   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:23:26.037111   65592 cri.go:89] found id: ""
	I1001 20:23:26.037136   65592 logs.go:276] 0 containers: []
	W1001 20:23:26.037144   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:23:26.037150   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:23:26.037206   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:23:26.076257   65592 cri.go:89] found id: ""
	I1001 20:23:26.076284   65592 logs.go:276] 0 containers: []
	W1001 20:23:26.076297   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:23:26.076302   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:23:26.076353   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:23:26.110336   65592 cri.go:89] found id: ""
	I1001 20:23:26.110366   65592 logs.go:276] 0 containers: []
	W1001 20:23:26.110378   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:23:26.110389   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:23:26.110402   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:23:26.164461   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:23:26.164498   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:23:26.178102   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:23:26.178131   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:23:26.257185   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:23:26.257211   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:23:26.257226   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:23:26.336696   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:23:26.336733   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:23:28.877526   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:23:28.890889   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:23:28.890973   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:23:28.929271   65592 cri.go:89] found id: ""
	I1001 20:23:28.929301   65592 logs.go:276] 0 containers: []
	W1001 20:23:28.929313   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:23:28.929320   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:23:28.929379   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:23:28.967063   65592 cri.go:89] found id: ""
	I1001 20:23:28.967091   65592 logs.go:276] 0 containers: []
	W1001 20:23:28.967099   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:23:28.967104   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:23:28.967154   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:23:29.005020   65592 cri.go:89] found id: ""
	I1001 20:23:29.005052   65592 logs.go:276] 0 containers: []
	W1001 20:23:29.005061   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:23:29.005067   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:23:29.005129   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:23:29.043711   65592 cri.go:89] found id: ""
	I1001 20:23:29.043735   65592 logs.go:276] 0 containers: []
	W1001 20:23:29.043744   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:23:29.043748   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:23:29.043803   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:23:29.080708   65592 cri.go:89] found id: ""
	I1001 20:23:29.080738   65592 logs.go:276] 0 containers: []
	W1001 20:23:29.080749   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:23:29.080755   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:23:29.080808   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:23:29.114114   65592 cri.go:89] found id: ""
	I1001 20:23:29.114146   65592 logs.go:276] 0 containers: []
	W1001 20:23:29.114156   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:23:29.114164   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:23:29.114223   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:23:29.147242   65592 cri.go:89] found id: ""
	I1001 20:23:29.147269   65592 logs.go:276] 0 containers: []
	W1001 20:23:29.147276   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:23:29.147282   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:23:29.147329   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:23:29.182434   65592 cri.go:89] found id: ""
	I1001 20:23:29.182464   65592 logs.go:276] 0 containers: []
	W1001 20:23:29.182473   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:23:29.182481   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:23:29.182493   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:23:29.233054   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:23:29.233098   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:23:29.247036   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:23:29.247069   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:23:29.311417   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:23:29.311445   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:23:29.311458   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:23:29.401241   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:23:29.401289   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:23:31.954034   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:23:31.967401   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:23:31.967474   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:23:32.000527   65592 cri.go:89] found id: ""
	I1001 20:23:32.000563   65592 logs.go:276] 0 containers: []
	W1001 20:23:32.000575   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:23:32.000591   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:23:32.000665   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:23:32.035379   65592 cri.go:89] found id: ""
	I1001 20:23:32.035408   65592 logs.go:276] 0 containers: []
	W1001 20:23:32.035415   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:23:32.035420   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:23:32.035469   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:23:32.070320   65592 cri.go:89] found id: ""
	I1001 20:23:32.070355   65592 logs.go:276] 0 containers: []
	W1001 20:23:32.070367   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:23:32.070376   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:23:32.070446   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:23:32.102608   65592 cri.go:89] found id: ""
	I1001 20:23:32.102641   65592 logs.go:276] 0 containers: []
	W1001 20:23:32.102652   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:23:32.102659   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:23:32.102719   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:23:32.142264   65592 cri.go:89] found id: ""
	I1001 20:23:32.142293   65592 logs.go:276] 0 containers: []
	W1001 20:23:32.142306   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:23:32.142312   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:23:32.142365   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:23:32.173500   65592 cri.go:89] found id: ""
	I1001 20:23:32.173525   65592 logs.go:276] 0 containers: []
	W1001 20:23:32.173533   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:23:32.173539   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:23:32.173585   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:23:32.209742   65592 cri.go:89] found id: ""
	I1001 20:23:32.209781   65592 logs.go:276] 0 containers: []
	W1001 20:23:32.209789   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:23:32.209796   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:23:32.209859   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:23:32.248972   65592 cri.go:89] found id: ""
	I1001 20:23:32.249007   65592 logs.go:276] 0 containers: []
	W1001 20:23:32.249020   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:23:32.249030   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:23:32.249043   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:23:32.314696   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:23:32.314732   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:23:32.314747   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:23:32.393448   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:23:32.393484   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:23:32.430424   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:23:32.430456   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:23:32.483415   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:23:32.483453   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:23:34.997426   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:23:35.011279   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:23:35.011348   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:23:35.045678   65592 cri.go:89] found id: ""
	I1001 20:23:35.045706   65592 logs.go:276] 0 containers: []
	W1001 20:23:35.045715   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:23:35.045720   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:23:35.045777   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:23:35.081587   65592 cri.go:89] found id: ""
	I1001 20:23:35.081615   65592 logs.go:276] 0 containers: []
	W1001 20:23:35.081625   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:23:35.081632   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:23:35.081703   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:23:35.120078   65592 cri.go:89] found id: ""
	I1001 20:23:35.120109   65592 logs.go:276] 0 containers: []
	W1001 20:23:35.120120   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:23:35.120128   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:23:35.120184   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:23:35.158171   65592 cri.go:89] found id: ""
	I1001 20:23:35.158202   65592 logs.go:276] 0 containers: []
	W1001 20:23:35.158213   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:23:35.158221   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:23:35.158279   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:23:35.218528   65592 cri.go:89] found id: ""
	I1001 20:23:35.218554   65592 logs.go:276] 0 containers: []
	W1001 20:23:35.218565   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:23:35.218572   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:23:35.218634   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:23:35.255766   65592 cri.go:89] found id: ""
	I1001 20:23:35.255797   65592 logs.go:276] 0 containers: []
	W1001 20:23:35.255808   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:23:35.255815   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:23:35.255879   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:23:35.288458   65592 cri.go:89] found id: ""
	I1001 20:23:35.288487   65592 logs.go:276] 0 containers: []
	W1001 20:23:35.288495   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:23:35.288501   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:23:35.288551   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:23:35.320604   65592 cri.go:89] found id: ""
	I1001 20:23:35.320631   65592 logs.go:276] 0 containers: []
	W1001 20:23:35.320638   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:23:35.320647   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:23:35.320659   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:23:35.372394   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:23:35.372441   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:23:35.386712   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:23:35.386743   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:23:35.453838   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:23:35.453866   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:23:35.453885   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:23:35.528665   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:23:35.528704   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:23:38.065698   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:23:38.079763   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:23:38.079825   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:23:38.113314   65592 cri.go:89] found id: ""
	I1001 20:23:38.113343   65592 logs.go:276] 0 containers: []
	W1001 20:23:38.113353   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:23:38.113361   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:23:38.113433   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:23:38.154584   65592 cri.go:89] found id: ""
	I1001 20:23:38.154614   65592 logs.go:276] 0 containers: []
	W1001 20:23:38.154626   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:23:38.154633   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:23:38.154686   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:23:38.188780   65592 cri.go:89] found id: ""
	I1001 20:23:38.188812   65592 logs.go:276] 0 containers: []
	W1001 20:23:38.188823   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:23:38.188830   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:23:38.188895   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:23:38.225064   65592 cri.go:89] found id: ""
	I1001 20:23:38.225091   65592 logs.go:276] 0 containers: []
	W1001 20:23:38.225103   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:23:38.225109   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:23:38.225158   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:23:38.258176   65592 cri.go:89] found id: ""
	I1001 20:23:38.258206   65592 logs.go:276] 0 containers: []
	W1001 20:23:38.258215   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:23:38.258227   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:23:38.258277   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:23:38.292269   65592 cri.go:89] found id: ""
	I1001 20:23:38.292303   65592 logs.go:276] 0 containers: []
	W1001 20:23:38.292312   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:23:38.292318   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:23:38.292393   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:23:38.326194   65592 cri.go:89] found id: ""
	I1001 20:23:38.326234   65592 logs.go:276] 0 containers: []
	W1001 20:23:38.326246   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:23:38.326254   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:23:38.326328   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:23:38.356853   65592 cri.go:89] found id: ""
	I1001 20:23:38.356884   65592 logs.go:276] 0 containers: []
	W1001 20:23:38.356895   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:23:38.356906   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:23:38.356919   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:23:38.410643   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:23:38.410676   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:23:38.424302   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:23:38.424331   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:23:38.493094   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:23:38.493125   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:23:38.493140   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:23:38.571900   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:23:38.571934   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:23:41.111972   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:23:41.130472   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:23:41.130549   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:23:41.187056   65592 cri.go:89] found id: ""
	I1001 20:23:41.187087   65592 logs.go:276] 0 containers: []
	W1001 20:23:41.187096   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:23:41.187102   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:23:41.187178   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:23:41.233144   65592 cri.go:89] found id: ""
	I1001 20:23:41.233176   65592 logs.go:276] 0 containers: []
	W1001 20:23:41.233184   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:23:41.233190   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:23:41.233252   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:23:41.274364   65592 cri.go:89] found id: ""
	I1001 20:23:41.274400   65592 logs.go:276] 0 containers: []
	W1001 20:23:41.274418   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:23:41.274425   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:23:41.274478   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:23:41.307558   65592 cri.go:89] found id: ""
	I1001 20:23:41.307590   65592 logs.go:276] 0 containers: []
	W1001 20:23:41.307601   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:23:41.307608   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:23:41.307666   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:23:41.343834   65592 cri.go:89] found id: ""
	I1001 20:23:41.343866   65592 logs.go:276] 0 containers: []
	W1001 20:23:41.343877   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:23:41.343885   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:23:41.343943   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:23:41.378660   65592 cri.go:89] found id: ""
	I1001 20:23:41.378695   65592 logs.go:276] 0 containers: []
	W1001 20:23:41.378706   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:23:41.378713   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:23:41.378779   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:23:41.411607   65592 cri.go:89] found id: ""
	I1001 20:23:41.411642   65592 logs.go:276] 0 containers: []
	W1001 20:23:41.411655   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:23:41.411663   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:23:41.411730   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:23:41.445598   65592 cri.go:89] found id: ""
	I1001 20:23:41.445633   65592 logs.go:276] 0 containers: []
	W1001 20:23:41.445643   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:23:41.445653   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:23:41.445679   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:23:41.518163   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:23:41.518194   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:23:41.518214   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:23:41.597436   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:23:41.597477   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:23:41.635347   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:23:41.635383   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:23:41.687941   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:23:41.687979   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
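
The block above is one pass of the harness's diagnostic retry loop: with the v1.20.0 control plane down, every crictl query returns an empty ID list, so it falls back to gathering kubelet, dmesg, CRI-O and container-status logs and tries again a few seconds later. Below is a minimal, hypothetical Go sketch of that polling pattern, not minikube's actual source; only the sudo/crictl invocation is taken verbatim from the log, and the 3-second cadence is inferred from the timestamps.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // anyContainer reports whether crictl lists any container matching the name
    // filter, using the same command line seen in the log:
    //   sudo crictl ps -a --quiet --name=<name>
    func anyContainer(name string) bool {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        return err == nil && strings.TrimSpace(string(out)) != ""
    }

    func main() {
        for i := 0; i < 30; i++ { // bounded retries; the real harness keeps its own timeout
            if anyContainer("kube-apiserver") {
                fmt.Println("kube-apiserver container found")
                return
            }
            fmt.Println("no kube-apiserver container yet; gathering logs and retrying")
            time.Sleep(3 * time.Second) // roughly the interval between passes in the log above
        }
        fmt.Println("gave up waiting for kube-apiserver")
    }
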
	I1001 20:23:44.202504   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:23:44.215258   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:23:44.215351   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:23:44.249783   65592 cri.go:89] found id: ""
	I1001 20:23:44.249809   65592 logs.go:276] 0 containers: []
	W1001 20:23:44.249817   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:23:44.249824   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:23:44.249878   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:23:44.283266   65592 cri.go:89] found id: ""
	I1001 20:23:44.283291   65592 logs.go:276] 0 containers: []
	W1001 20:23:44.283300   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:23:44.283307   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:23:44.283371   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:23:44.317592   65592 cri.go:89] found id: ""
	I1001 20:23:44.317620   65592 logs.go:276] 0 containers: []
	W1001 20:23:44.317628   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:23:44.317633   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:23:44.317684   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:23:44.350856   65592 cri.go:89] found id: ""
	I1001 20:23:44.350883   65592 logs.go:276] 0 containers: []
	W1001 20:23:44.350891   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:23:44.350896   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:23:44.350945   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:23:44.388317   65592 cri.go:89] found id: ""
	I1001 20:23:44.388350   65592 logs.go:276] 0 containers: []
	W1001 20:23:44.388378   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:23:44.388386   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:23:44.388447   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:23:44.422283   65592 cri.go:89] found id: ""
	I1001 20:23:44.422315   65592 logs.go:276] 0 containers: []
	W1001 20:23:44.422329   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:23:44.422338   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:23:44.422407   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:23:44.455177   65592 cri.go:89] found id: ""
	I1001 20:23:44.455207   65592 logs.go:276] 0 containers: []
	W1001 20:23:44.455217   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:23:44.455225   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:23:44.455284   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:23:44.491305   65592 cri.go:89] found id: ""
	I1001 20:23:44.491337   65592 logs.go:276] 0 containers: []
	W1001 20:23:44.491346   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:23:44.491367   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:23:44.491381   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:23:44.543053   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:23:44.543091   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:23:44.556913   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:23:44.556941   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:23:44.633812   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:23:44.633841   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:23:44.633857   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:23:44.713469   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:23:44.713507   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:23:47.251339   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:23:47.266356   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:23:47.266419   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:23:47.300446   65592 cri.go:89] found id: ""
	I1001 20:23:47.300482   65592 logs.go:276] 0 containers: []
	W1001 20:23:47.300491   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:23:47.300496   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:23:47.300543   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:23:47.334231   65592 cri.go:89] found id: ""
	I1001 20:23:47.334262   65592 logs.go:276] 0 containers: []
	W1001 20:23:47.334273   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:23:47.334280   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:23:47.334345   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:23:47.367250   65592 cri.go:89] found id: ""
	I1001 20:23:47.367281   65592 logs.go:276] 0 containers: []
	W1001 20:23:47.367292   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:23:47.367301   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:23:47.367373   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:23:47.399895   65592 cri.go:89] found id: ""
	I1001 20:23:47.399923   65592 logs.go:276] 0 containers: []
	W1001 20:23:47.399932   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:23:47.399938   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:23:47.399998   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:23:47.435298   65592 cri.go:89] found id: ""
	I1001 20:23:47.435329   65592 logs.go:276] 0 containers: []
	W1001 20:23:47.435338   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:23:47.435347   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:23:47.435422   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:23:47.467147   65592 cri.go:89] found id: ""
	I1001 20:23:47.467173   65592 logs.go:276] 0 containers: []
	W1001 20:23:47.467180   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:23:47.467186   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:23:47.467241   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:23:47.504472   65592 cri.go:89] found id: ""
	I1001 20:23:47.504495   65592 logs.go:276] 0 containers: []
	W1001 20:23:47.504502   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:23:47.504508   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:23:47.504568   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:23:47.542539   65592 cri.go:89] found id: ""
	I1001 20:23:47.542564   65592 logs.go:276] 0 containers: []
	W1001 20:23:47.542571   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:23:47.542580   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:23:47.542594   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:23:47.592423   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:23:47.592459   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:23:47.605447   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:23:47.605485   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:23:47.672394   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:23:47.672433   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:23:47.672459   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:23:47.746800   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:23:47.746836   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:23:50.287274   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:23:50.300449   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:23:50.300526   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:23:50.339277   65592 cri.go:89] found id: ""
	I1001 20:23:50.339309   65592 logs.go:276] 0 containers: []
	W1001 20:23:50.339321   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:23:50.339330   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:23:50.339391   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:23:50.378184   65592 cri.go:89] found id: ""
	I1001 20:23:50.378216   65592 logs.go:276] 0 containers: []
	W1001 20:23:50.378228   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:23:50.378236   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:23:50.378296   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:23:50.415319   65592 cri.go:89] found id: ""
	I1001 20:23:50.415355   65592 logs.go:276] 0 containers: []
	W1001 20:23:50.415364   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:23:50.415370   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:23:50.415430   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:23:50.451161   65592 cri.go:89] found id: ""
	I1001 20:23:50.451192   65592 logs.go:276] 0 containers: []
	W1001 20:23:50.451205   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:23:50.451212   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:23:50.451272   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:23:50.486162   65592 cri.go:89] found id: ""
	I1001 20:23:50.486190   65592 logs.go:276] 0 containers: []
	W1001 20:23:50.486198   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:23:50.486203   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:23:50.486264   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:23:50.518345   65592 cri.go:89] found id: ""
	I1001 20:23:50.518373   65592 logs.go:276] 0 containers: []
	W1001 20:23:50.518384   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:23:50.518390   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:23:50.518457   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:23:50.551040   65592 cri.go:89] found id: ""
	I1001 20:23:50.551067   65592 logs.go:276] 0 containers: []
	W1001 20:23:50.551074   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:23:50.551080   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:23:50.551125   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:23:50.583219   65592 cri.go:89] found id: ""
	I1001 20:23:50.583246   65592 logs.go:276] 0 containers: []
	W1001 20:23:50.583254   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:23:50.583263   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:23:50.583275   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:23:50.659532   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:23:50.659573   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:23:50.698041   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:23:50.698073   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:23:50.750657   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:23:50.750692   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:23:50.765143   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:23:50.765174   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:23:50.834689   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:23:53.334843   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:23:53.347471   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:23:53.347563   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:23:53.379378   65592 cri.go:89] found id: ""
	I1001 20:23:53.379403   65592 logs.go:276] 0 containers: []
	W1001 20:23:53.379411   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:23:53.379417   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:23:53.379462   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:23:53.412278   65592 cri.go:89] found id: ""
	I1001 20:23:53.412306   65592 logs.go:276] 0 containers: []
	W1001 20:23:53.412314   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:23:53.412319   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:23:53.412401   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:23:53.444780   65592 cri.go:89] found id: ""
	I1001 20:23:53.444805   65592 logs.go:276] 0 containers: []
	W1001 20:23:53.444818   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:23:53.444826   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:23:53.444891   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:23:53.479143   65592 cri.go:89] found id: ""
	I1001 20:23:53.479176   65592 logs.go:276] 0 containers: []
	W1001 20:23:53.479186   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:23:53.479194   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:23:53.479253   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:23:53.513045   65592 cri.go:89] found id: ""
	I1001 20:23:53.513074   65592 logs.go:276] 0 containers: []
	W1001 20:23:53.513081   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:23:53.513087   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:23:53.513144   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:23:53.547876   65592 cri.go:89] found id: ""
	I1001 20:23:53.547911   65592 logs.go:276] 0 containers: []
	W1001 20:23:53.547920   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:23:53.547926   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:23:53.547980   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:23:53.581679   65592 cri.go:89] found id: ""
	I1001 20:23:53.581718   65592 logs.go:276] 0 containers: []
	W1001 20:23:53.581726   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:23:53.581732   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:23:53.581792   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:23:53.614898   65592 cri.go:89] found id: ""
	I1001 20:23:53.614930   65592 logs.go:276] 0 containers: []
	W1001 20:23:53.614941   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:23:53.614951   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:23:53.614965   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:23:53.669981   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:23:53.670018   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:23:53.685598   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:23:53.685628   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:23:53.753563   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:23:53.753599   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:23:53.753614   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:23:53.845413   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:23:53.845454   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
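
The "container status" step above runs a shell fallback: use crictl if it resolves on PATH, otherwise run sudo docker ps -a. A rough Go equivalent of that fallback is sketched below; it is illustrative only and assumes crictl and/or docker are installed, it is not minikube's implementation.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus mirrors the logged shell fallback:
    //   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    func containerStatus() ([]byte, error) {
        if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
            return out, nil
        }
        // crictl missing or failed: fall back to docker, as the shell command does
        return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("neither crictl nor docker produced a listing:", err)
            return
        }
        fmt.Print(string(out))
    }
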
	I1001 20:23:56.386228   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:23:56.399428   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:23:56.399494   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:23:56.432974   65592 cri.go:89] found id: ""
	I1001 20:23:56.433004   65592 logs.go:276] 0 containers: []
	W1001 20:23:56.433015   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:23:56.433023   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:23:56.433072   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:23:56.468619   65592 cri.go:89] found id: ""
	I1001 20:23:56.468647   65592 logs.go:276] 0 containers: []
	W1001 20:23:56.468655   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:23:56.468663   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:23:56.468719   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:23:56.503694   65592 cri.go:89] found id: ""
	I1001 20:23:56.503726   65592 logs.go:276] 0 containers: []
	W1001 20:23:56.503734   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:23:56.503740   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:23:56.503798   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:23:56.540320   65592 cri.go:89] found id: ""
	I1001 20:23:56.540345   65592 logs.go:276] 0 containers: []
	W1001 20:23:56.540353   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:23:56.540376   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:23:56.540434   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:23:56.576354   65592 cri.go:89] found id: ""
	I1001 20:23:56.576433   65592 logs.go:276] 0 containers: []
	W1001 20:23:56.576443   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:23:56.576450   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:23:56.576517   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:23:56.613792   65592 cri.go:89] found id: ""
	I1001 20:23:56.613820   65592 logs.go:276] 0 containers: []
	W1001 20:23:56.613829   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:23:56.613835   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:23:56.613885   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:23:56.647506   65592 cri.go:89] found id: ""
	I1001 20:23:56.647536   65592 logs.go:276] 0 containers: []
	W1001 20:23:56.647544   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:23:56.647550   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:23:56.647608   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:23:56.679135   65592 cri.go:89] found id: ""
	I1001 20:23:56.679161   65592 logs.go:276] 0 containers: []
	W1001 20:23:56.679169   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:23:56.679178   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:23:56.679188   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:23:56.730438   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:23:56.730478   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:23:56.745239   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:23:56.745265   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:23:56.813252   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:23:56.813300   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:23:56.813316   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:23:56.890207   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:23:56.890242   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:23:59.429446   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:23:59.441968   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:23:59.442044   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:23:59.474277   65592 cri.go:89] found id: ""
	I1001 20:23:59.474310   65592 logs.go:276] 0 containers: []
	W1001 20:23:59.474322   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:23:59.474329   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:23:59.474391   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:23:59.507735   65592 cri.go:89] found id: ""
	I1001 20:23:59.507763   65592 logs.go:276] 0 containers: []
	W1001 20:23:59.507771   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:23:59.507777   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:23:59.507820   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:23:59.539852   65592 cri.go:89] found id: ""
	I1001 20:23:59.539880   65592 logs.go:276] 0 containers: []
	W1001 20:23:59.539887   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:23:59.539893   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:23:59.539939   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:23:59.577739   65592 cri.go:89] found id: ""
	I1001 20:23:59.577770   65592 logs.go:276] 0 containers: []
	W1001 20:23:59.577782   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:23:59.577789   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:23:59.577848   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:23:59.608902   65592 cri.go:89] found id: ""
	I1001 20:23:59.608932   65592 logs.go:276] 0 containers: []
	W1001 20:23:59.608942   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:23:59.608947   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:23:59.608997   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:23:59.641378   65592 cri.go:89] found id: ""
	I1001 20:23:59.641401   65592 logs.go:276] 0 containers: []
	W1001 20:23:59.641409   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:23:59.641416   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:23:59.641469   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:23:59.677032   65592 cri.go:89] found id: ""
	I1001 20:23:59.677061   65592 logs.go:276] 0 containers: []
	W1001 20:23:59.677069   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:23:59.677074   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:23:59.677121   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:23:59.709352   65592 cri.go:89] found id: ""
	I1001 20:23:59.709384   65592 logs.go:276] 0 containers: []
	W1001 20:23:59.709395   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:23:59.709408   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:23:59.709419   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:23:59.746272   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:23:59.746299   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:23:59.796391   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:23:59.796430   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:23:59.811038   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:23:59.811071   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:23:59.884729   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:23:59.884752   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:23:59.884767   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:02.458310   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:02.470513   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:02.470580   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:02.502909   65592 cri.go:89] found id: ""
	I1001 20:24:02.502936   65592 logs.go:276] 0 containers: []
	W1001 20:24:02.502943   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:02.502949   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:02.503004   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:02.539385   65592 cri.go:89] found id: ""
	I1001 20:24:02.539418   65592 logs.go:276] 0 containers: []
	W1001 20:24:02.539429   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:02.539435   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:02.539481   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:02.576422   65592 cri.go:89] found id: ""
	I1001 20:24:02.576454   65592 logs.go:276] 0 containers: []
	W1001 20:24:02.576464   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:02.576470   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:02.576519   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:02.610476   65592 cri.go:89] found id: ""
	I1001 20:24:02.610504   65592 logs.go:276] 0 containers: []
	W1001 20:24:02.610512   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:02.610518   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:02.610566   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:02.643763   65592 cri.go:89] found id: ""
	I1001 20:24:02.643791   65592 logs.go:276] 0 containers: []
	W1001 20:24:02.643799   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:02.643805   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:02.643899   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:02.676600   65592 cri.go:89] found id: ""
	I1001 20:24:02.676625   65592 logs.go:276] 0 containers: []
	W1001 20:24:02.676634   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:02.676640   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:02.676694   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:02.713188   65592 cri.go:89] found id: ""
	I1001 20:24:02.713213   65592 logs.go:276] 0 containers: []
	W1001 20:24:02.713223   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:02.713230   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:02.713302   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:02.745582   65592 cri.go:89] found id: ""
	I1001 20:24:02.745613   65592 logs.go:276] 0 containers: []
	W1001 20:24:02.745625   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:02.745636   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:02.745648   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:02.822367   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:02.822403   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:02.859680   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:02.859714   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:02.910065   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:02.910102   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:02.922803   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:02.922833   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:02.991596   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
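
The recurring "connection to the server localhost:8443 was refused" error means nothing is listening on the apiserver endpoint named in /var/lib/minikube/kubeconfig, so every describe-nodes attempt fails before reaching the cluster. A minimal probe sketch follows; the localhost:8443 address is assumed from the error text above, and this helper is hypothetical, not part of the test suite.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Try a plain TCP connect to the endpoint kubectl is being refused on.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver endpoint unreachable:", err) // matches the refusals logged above
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }
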
	I1001 20:24:05.491984   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:05.504554   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:05.504615   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:05.535191   65592 cri.go:89] found id: ""
	I1001 20:24:05.535224   65592 logs.go:276] 0 containers: []
	W1001 20:24:05.535233   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:05.535239   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:05.535325   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:05.573561   65592 cri.go:89] found id: ""
	I1001 20:24:05.573589   65592 logs.go:276] 0 containers: []
	W1001 20:24:05.573598   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:05.573603   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:05.573651   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:05.606116   65592 cri.go:89] found id: ""
	I1001 20:24:05.606150   65592 logs.go:276] 0 containers: []
	W1001 20:24:05.606161   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:05.606169   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:05.606228   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:05.638901   65592 cri.go:89] found id: ""
	I1001 20:24:05.638930   65592 logs.go:276] 0 containers: []
	W1001 20:24:05.638938   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:05.638943   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:05.638991   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:05.670733   65592 cri.go:89] found id: ""
	I1001 20:24:05.670760   65592 logs.go:276] 0 containers: []
	W1001 20:24:05.670769   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:05.670775   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:05.670837   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:05.707320   65592 cri.go:89] found id: ""
	I1001 20:24:05.707347   65592 logs.go:276] 0 containers: []
	W1001 20:24:05.707355   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:05.707361   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:05.707407   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:05.743905   65592 cri.go:89] found id: ""
	I1001 20:24:05.743936   65592 logs.go:276] 0 containers: []
	W1001 20:24:05.743947   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:05.743954   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:05.744011   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:05.780688   65592 cri.go:89] found id: ""
	I1001 20:24:05.780711   65592 logs.go:276] 0 containers: []
	W1001 20:24:05.780719   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:05.780727   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:05.780739   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:05.821360   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:05.821390   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:05.870685   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:05.870723   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:05.885480   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:05.885513   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:05.956654   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:05.956691   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:05.956790   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:08.543368   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:08.555647   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:08.555735   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:08.590344   65592 cri.go:89] found id: ""
	I1001 20:24:08.590372   65592 logs.go:276] 0 containers: []
	W1001 20:24:08.590380   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:08.590388   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:08.590444   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:08.622636   65592 cri.go:89] found id: ""
	I1001 20:24:08.622660   65592 logs.go:276] 0 containers: []
	W1001 20:24:08.622668   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:08.622673   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:08.622719   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:08.654296   65592 cri.go:89] found id: ""
	I1001 20:24:08.654325   65592 logs.go:276] 0 containers: []
	W1001 20:24:08.654336   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:08.654344   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:08.654395   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:08.687000   65592 cri.go:89] found id: ""
	I1001 20:24:08.687030   65592 logs.go:276] 0 containers: []
	W1001 20:24:08.687039   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:08.687046   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:08.687098   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:08.719613   65592 cri.go:89] found id: ""
	I1001 20:24:08.719643   65592 logs.go:276] 0 containers: []
	W1001 20:24:08.719653   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:08.719660   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:08.719785   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:08.755319   65592 cri.go:89] found id: ""
	I1001 20:24:08.755343   65592 logs.go:276] 0 containers: []
	W1001 20:24:08.755351   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:08.755356   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:08.755403   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:08.792717   65592 cri.go:89] found id: ""
	I1001 20:24:08.792740   65592 logs.go:276] 0 containers: []
	W1001 20:24:08.792748   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:08.792753   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:08.792799   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:08.827123   65592 cri.go:89] found id: ""
	I1001 20:24:08.827153   65592 logs.go:276] 0 containers: []
	W1001 20:24:08.827166   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:08.827173   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:08.827184   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:08.867433   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:08.867465   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:08.916944   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:08.916979   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:08.929816   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:08.929844   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:08.999732   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:08.999766   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:08.999785   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:11.577180   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:11.589632   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:11.589711   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:11.621975   65592 cri.go:89] found id: ""
	I1001 20:24:11.621999   65592 logs.go:276] 0 containers: []
	W1001 20:24:11.622007   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:11.622018   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:11.622082   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:11.654630   65592 cri.go:89] found id: ""
	I1001 20:24:11.654657   65592 logs.go:276] 0 containers: []
	W1001 20:24:11.654664   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:11.654669   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:11.654716   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:11.686813   65592 cri.go:89] found id: ""
	I1001 20:24:11.686839   65592 logs.go:276] 0 containers: []
	W1001 20:24:11.686846   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:11.686859   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:11.686912   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:11.720071   65592 cri.go:89] found id: ""
	I1001 20:24:11.720106   65592 logs.go:276] 0 containers: []
	W1001 20:24:11.720117   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:11.720124   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:11.720188   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:11.753923   65592 cri.go:89] found id: ""
	I1001 20:24:11.753956   65592 logs.go:276] 0 containers: []
	W1001 20:24:11.753967   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:11.753975   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:11.754026   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:11.786834   65592 cri.go:89] found id: ""
	I1001 20:24:11.786863   65592 logs.go:276] 0 containers: []
	W1001 20:24:11.786871   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:11.786877   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:11.786953   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:11.819375   65592 cri.go:89] found id: ""
	I1001 20:24:11.819405   65592 logs.go:276] 0 containers: []
	W1001 20:24:11.819414   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:11.819434   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:11.819495   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:11.853478   65592 cri.go:89] found id: ""
	I1001 20:24:11.853506   65592 logs.go:276] 0 containers: []
	W1001 20:24:11.853513   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:11.853522   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:11.853533   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:11.905150   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:11.905187   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:11.918518   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:11.918550   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:11.982485   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:11.982515   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:11.982530   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:12.062775   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:12.062811   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:14.604500   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:14.617902   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:14.617989   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:14.650415   65592 cri.go:89] found id: ""
	I1001 20:24:14.650456   65592 logs.go:276] 0 containers: []
	W1001 20:24:14.650468   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:14.650477   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:14.650542   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:14.685909   65592 cri.go:89] found id: ""
	I1001 20:24:14.685936   65592 logs.go:276] 0 containers: []
	W1001 20:24:14.685944   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:14.685950   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:14.686002   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:14.721147   65592 cri.go:89] found id: ""
	I1001 20:24:14.721173   65592 logs.go:276] 0 containers: []
	W1001 20:24:14.721185   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:14.721192   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:14.721254   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:14.755789   65592 cri.go:89] found id: ""
	I1001 20:24:14.755820   65592 logs.go:276] 0 containers: []
	W1001 20:24:14.755831   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:14.755838   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:14.755907   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:14.788283   65592 cri.go:89] found id: ""
	I1001 20:24:14.788320   65592 logs.go:276] 0 containers: []
	W1001 20:24:14.788330   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:14.788338   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:14.788423   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:14.826418   65592 cri.go:89] found id: ""
	I1001 20:24:14.826448   65592 logs.go:276] 0 containers: []
	W1001 20:24:14.826459   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:14.826466   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:14.826534   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:14.860785   65592 cri.go:89] found id: ""
	I1001 20:24:14.860818   65592 logs.go:276] 0 containers: []
	W1001 20:24:14.860829   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:14.860840   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:14.860908   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:14.896551   65592 cri.go:89] found id: ""
	I1001 20:24:14.896584   65592 logs.go:276] 0 containers: []
	W1001 20:24:14.896593   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:14.896601   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:14.896611   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:14.951717   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:14.951758   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:14.965078   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:14.965109   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:15.041178   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:15.041201   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:15.041217   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:15.118081   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:15.118122   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:17.657430   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:17.670144   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:17.670229   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:17.713890   65592 cri.go:89] found id: ""
	I1001 20:24:17.713920   65592 logs.go:276] 0 containers: []
	W1001 20:24:17.713931   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:17.713949   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:17.714010   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:17.746507   65592 cri.go:89] found id: ""
	I1001 20:24:17.746537   65592 logs.go:276] 0 containers: []
	W1001 20:24:17.746545   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:17.746550   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:17.746607   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:17.780492   65592 cri.go:89] found id: ""
	I1001 20:24:17.780529   65592 logs.go:276] 0 containers: []
	W1001 20:24:17.780540   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:17.780546   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:17.780605   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:17.814242   65592 cri.go:89] found id: ""
	I1001 20:24:17.814270   65592 logs.go:276] 0 containers: []
	W1001 20:24:17.814278   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:17.814284   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:17.814333   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:17.846510   65592 cri.go:89] found id: ""
	I1001 20:24:17.846542   65592 logs.go:276] 0 containers: []
	W1001 20:24:17.846554   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:17.846561   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:17.846623   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:17.879534   65592 cri.go:89] found id: ""
	I1001 20:24:17.879562   65592 logs.go:276] 0 containers: []
	W1001 20:24:17.879572   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:17.879579   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:17.879644   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:17.912996   65592 cri.go:89] found id: ""
	I1001 20:24:17.913022   65592 logs.go:276] 0 containers: []
	W1001 20:24:17.913029   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:17.913035   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:17.913080   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:17.948472   65592 cri.go:89] found id: ""
	I1001 20:24:17.948502   65592 logs.go:276] 0 containers: []
	W1001 20:24:17.948513   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:17.948524   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:17.948543   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:17.961363   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:17.961401   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:18.028534   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:18.028569   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:18.028599   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:18.105892   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:18.105928   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:18.149756   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:18.149795   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:20.701485   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:20.722434   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:20.722500   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:20.764649   65592 cri.go:89] found id: ""
	I1001 20:24:20.764678   65592 logs.go:276] 0 containers: []
	W1001 20:24:20.764688   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:20.764694   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:20.764759   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:20.799227   65592 cri.go:89] found id: ""
	I1001 20:24:20.799256   65592 logs.go:276] 0 containers: []
	W1001 20:24:20.799267   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:20.799274   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:20.799325   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:20.834425   65592 cri.go:89] found id: ""
	I1001 20:24:20.834463   65592 logs.go:276] 0 containers: []
	W1001 20:24:20.834474   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:20.834482   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:20.834550   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:20.870516   65592 cri.go:89] found id: ""
	I1001 20:24:20.870548   65592 logs.go:276] 0 containers: []
	W1001 20:24:20.870562   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:20.870568   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:20.870625   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:20.912077   65592 cri.go:89] found id: ""
	I1001 20:24:20.912107   65592 logs.go:276] 0 containers: []
	W1001 20:24:20.912119   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:20.912126   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:20.912183   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:20.948488   65592 cri.go:89] found id: ""
	I1001 20:24:20.948519   65592 logs.go:276] 0 containers: []
	W1001 20:24:20.948531   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:20.948539   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:20.948598   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:20.982969   65592 cri.go:89] found id: ""
	I1001 20:24:20.983000   65592 logs.go:276] 0 containers: []
	W1001 20:24:20.983011   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:20.983019   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:20.983080   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:21.015936   65592 cri.go:89] found id: ""
	I1001 20:24:21.015967   65592 logs.go:276] 0 containers: []
	W1001 20:24:21.015979   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:21.015990   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:21.016004   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:21.064225   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:21.064260   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:21.077087   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:21.077114   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:21.147797   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:21.147829   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:21.147844   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:21.227504   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:21.227536   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:23.768539   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:23.781751   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:23.781814   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:23.817520   65592 cri.go:89] found id: ""
	I1001 20:24:23.817545   65592 logs.go:276] 0 containers: []
	W1001 20:24:23.817555   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:23.817563   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:23.817631   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:23.850525   65592 cri.go:89] found id: ""
	I1001 20:24:23.850561   65592 logs.go:276] 0 containers: []
	W1001 20:24:23.850573   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:23.850579   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:23.850630   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:23.885500   65592 cri.go:89] found id: ""
	I1001 20:24:23.885535   65592 logs.go:276] 0 containers: []
	W1001 20:24:23.885545   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:23.885551   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:23.885600   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:23.919976   65592 cri.go:89] found id: ""
	I1001 20:24:23.920009   65592 logs.go:276] 0 containers: []
	W1001 20:24:23.920021   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:23.920028   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:23.920087   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:23.954361   65592 cri.go:89] found id: ""
	I1001 20:24:23.954395   65592 logs.go:276] 0 containers: []
	W1001 20:24:23.954407   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:23.954414   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:23.954476   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:23.987467   65592 cri.go:89] found id: ""
	I1001 20:24:23.987514   65592 logs.go:276] 0 containers: []
	W1001 20:24:23.987523   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:23.987529   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:23.987579   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:24.020866   65592 cri.go:89] found id: ""
	I1001 20:24:24.020898   65592 logs.go:276] 0 containers: []
	W1001 20:24:24.020909   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:24.020916   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:24.020982   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:24.054877   65592 cri.go:89] found id: ""
	I1001 20:24:24.054905   65592 logs.go:276] 0 containers: []
	W1001 20:24:24.054913   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:24.054921   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:24.054931   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:24.133217   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:24.133259   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:24.173943   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:24.173980   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:24.224980   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:24.225018   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:24.239998   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:24.240039   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:24.309244   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:26.809587   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:26.822828   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:26.822903   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:26.857162   65592 cri.go:89] found id: ""
	I1001 20:24:26.857186   65592 logs.go:276] 0 containers: []
	W1001 20:24:26.857194   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:26.857200   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:26.857249   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:26.893026   65592 cri.go:89] found id: ""
	I1001 20:24:26.893055   65592 logs.go:276] 0 containers: []
	W1001 20:24:26.893066   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:26.893074   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:26.893138   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:26.931407   65592 cri.go:89] found id: ""
	I1001 20:24:26.931433   65592 logs.go:276] 0 containers: []
	W1001 20:24:26.931463   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:26.931470   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:26.931518   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:26.968279   65592 cri.go:89] found id: ""
	I1001 20:24:26.968304   65592 logs.go:276] 0 containers: []
	W1001 20:24:26.968311   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:26.968317   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:26.968391   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:27.001091   65592 cri.go:89] found id: ""
	I1001 20:24:27.001123   65592 logs.go:276] 0 containers: []
	W1001 20:24:27.001132   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:27.001138   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:27.001193   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:27.037553   65592 cri.go:89] found id: ""
	I1001 20:24:27.037580   65592 logs.go:276] 0 containers: []
	W1001 20:24:27.037588   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:27.037594   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:27.037652   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:27.071699   65592 cri.go:89] found id: ""
	I1001 20:24:27.071735   65592 logs.go:276] 0 containers: []
	W1001 20:24:27.071746   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:27.071753   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:27.071811   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:27.108033   65592 cri.go:89] found id: ""
	I1001 20:24:27.108064   65592 logs.go:276] 0 containers: []
	W1001 20:24:27.108077   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:27.108088   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:27.108104   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:27.193838   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:27.193877   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:27.249070   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:27.249101   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:27.302349   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:27.302387   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:27.316440   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:27.316470   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:27.380173   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:29.881076   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:29.894018   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:29.894104   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:29.926964   65592 cri.go:89] found id: ""
	I1001 20:24:29.926991   65592 logs.go:276] 0 containers: []
	W1001 20:24:29.927000   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:29.927006   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:29.927073   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:29.959523   65592 cri.go:89] found id: ""
	I1001 20:24:29.959553   65592 logs.go:276] 0 containers: []
	W1001 20:24:29.959561   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:29.959567   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:29.959612   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:29.992491   65592 cri.go:89] found id: ""
	I1001 20:24:29.992524   65592 logs.go:276] 0 containers: []
	W1001 20:24:29.992535   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:29.992542   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:29.992602   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:30.027653   65592 cri.go:89] found id: ""
	I1001 20:24:30.027701   65592 logs.go:276] 0 containers: []
	W1001 20:24:30.027710   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:30.027716   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:30.027768   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:30.061234   65592 cri.go:89] found id: ""
	I1001 20:24:30.061271   65592 logs.go:276] 0 containers: []
	W1001 20:24:30.061282   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:30.061289   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:30.061349   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:30.094941   65592 cri.go:89] found id: ""
	I1001 20:24:30.094969   65592 logs.go:276] 0 containers: []
	W1001 20:24:30.094980   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:30.094988   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:30.095081   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:30.130393   65592 cri.go:89] found id: ""
	I1001 20:24:30.130419   65592 logs.go:276] 0 containers: []
	W1001 20:24:30.130430   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:30.130437   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:30.130503   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:30.166135   65592 cri.go:89] found id: ""
	I1001 20:24:30.166165   65592 logs.go:276] 0 containers: []
	W1001 20:24:30.166175   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:30.166186   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:30.166199   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:30.236004   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:30.236030   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:30.236043   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:30.316288   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:30.316328   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:30.353634   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:30.353660   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:30.405496   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:30.405535   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:32.919472   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:32.938298   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:32.938358   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:32.983055   65592 cri.go:89] found id: ""
	I1001 20:24:32.983086   65592 logs.go:276] 0 containers: []
	W1001 20:24:32.983095   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:32.983101   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:32.983161   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:33.017472   65592 cri.go:89] found id: ""
	I1001 20:24:33.017501   65592 logs.go:276] 0 containers: []
	W1001 20:24:33.017513   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:33.017519   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:33.017582   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:33.052068   65592 cri.go:89] found id: ""
	I1001 20:24:33.052102   65592 logs.go:276] 0 containers: []
	W1001 20:24:33.052113   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:33.052120   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:33.052170   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:33.086993   65592 cri.go:89] found id: ""
	I1001 20:24:33.087025   65592 logs.go:276] 0 containers: []
	W1001 20:24:33.087037   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:33.087044   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:33.087095   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:33.122014   65592 cri.go:89] found id: ""
	I1001 20:24:33.122045   65592 logs.go:276] 0 containers: []
	W1001 20:24:33.122065   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:33.122073   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:33.122124   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:33.154507   65592 cri.go:89] found id: ""
	I1001 20:24:33.154537   65592 logs.go:276] 0 containers: []
	W1001 20:24:33.154548   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:33.154585   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:33.154652   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:33.188128   65592 cri.go:89] found id: ""
	I1001 20:24:33.188154   65592 logs.go:276] 0 containers: []
	W1001 20:24:33.188165   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:33.188173   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:33.188231   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:33.221238   65592 cri.go:89] found id: ""
	I1001 20:24:33.221266   65592 logs.go:276] 0 containers: []
	W1001 20:24:33.221277   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:33.221288   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:33.221303   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:33.302525   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:33.302571   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:33.347570   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:33.347609   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:33.398478   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:33.398515   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:33.412224   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:33.412256   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:33.478068   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:35.979030   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:35.991482   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:35.991569   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:36.025040   65592 cri.go:89] found id: ""
	I1001 20:24:36.025074   65592 logs.go:276] 0 containers: []
	W1001 20:24:36.025089   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:36.025096   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:36.025147   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:36.063261   65592 cri.go:89] found id: ""
	I1001 20:24:36.063296   65592 logs.go:276] 0 containers: []
	W1001 20:24:36.063307   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:36.063315   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:36.063380   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:36.098347   65592 cri.go:89] found id: ""
	I1001 20:24:36.098373   65592 logs.go:276] 0 containers: []
	W1001 20:24:36.098381   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:36.098387   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:36.098436   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:36.139143   65592 cri.go:89] found id: ""
	I1001 20:24:36.139172   65592 logs.go:276] 0 containers: []
	W1001 20:24:36.139184   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:36.139192   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:36.139253   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:36.171349   65592 cri.go:89] found id: ""
	I1001 20:24:36.171383   65592 logs.go:276] 0 containers: []
	W1001 20:24:36.171396   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:36.171402   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:36.171482   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:36.208093   65592 cri.go:89] found id: ""
	I1001 20:24:36.208124   65592 logs.go:276] 0 containers: []
	W1001 20:24:36.208135   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:36.208143   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:36.208212   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:36.242330   65592 cri.go:89] found id: ""
	I1001 20:24:36.242360   65592 logs.go:276] 0 containers: []
	W1001 20:24:36.242371   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:36.242377   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:36.242446   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:36.276469   65592 cri.go:89] found id: ""
	I1001 20:24:36.276500   65592 logs.go:276] 0 containers: []
	W1001 20:24:36.276511   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:36.276521   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:36.276536   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:36.290316   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:36.290346   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:36.358587   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:36.358610   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:36.358622   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:36.431890   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:36.431934   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:36.472974   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:36.473003   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:39.025570   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:39.040932   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:39.041011   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:39.076620   65592 cri.go:89] found id: ""
	I1001 20:24:39.076649   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.076659   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:39.076666   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:39.076734   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:39.113395   65592 cri.go:89] found id: ""
	I1001 20:24:39.113422   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.113430   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:39.113436   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:39.113490   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:39.147839   65592 cri.go:89] found id: ""
	I1001 20:24:39.147877   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.147890   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:39.147899   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:39.147966   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:39.179721   65592 cri.go:89] found id: ""
	I1001 20:24:39.179758   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.179769   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:39.179777   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:39.179842   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:39.211511   65592 cri.go:89] found id: ""
	I1001 20:24:39.211541   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.211549   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:39.211554   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:39.211603   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:39.243517   65592 cri.go:89] found id: ""
	I1001 20:24:39.243544   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.243552   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:39.243557   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:39.243623   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:39.276159   65592 cri.go:89] found id: ""
	I1001 20:24:39.276182   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.276189   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:39.276195   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:39.276239   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:39.307242   65592 cri.go:89] found id: ""
	I1001 20:24:39.307274   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.307285   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:39.307295   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:39.307307   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:39.387442   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:39.387486   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:39.423123   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:39.423156   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:39.474648   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:39.474686   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:39.488129   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:39.488158   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:39.557478   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:42.058114   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:42.071979   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:42.072056   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:42.110529   65592 cri.go:89] found id: ""
	I1001 20:24:42.110557   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.110565   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:42.110570   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:42.110619   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:42.145408   65592 cri.go:89] found id: ""
	I1001 20:24:42.145436   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.145445   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:42.145450   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:42.145509   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:42.180602   65592 cri.go:89] found id: ""
	I1001 20:24:42.180641   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.180655   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:42.180664   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:42.180722   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:42.214116   65592 cri.go:89] found id: ""
	I1001 20:24:42.214148   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.214160   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:42.214168   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:42.214224   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:42.246785   65592 cri.go:89] found id: ""
	I1001 20:24:42.246814   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.246825   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:42.246832   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:42.246900   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:42.281586   65592 cri.go:89] found id: ""
	I1001 20:24:42.281633   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.281645   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:42.281660   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:42.281724   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:42.318982   65592 cri.go:89] found id: ""
	I1001 20:24:42.319015   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.319025   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:42.319032   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:42.319085   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:42.350592   65592 cri.go:89] found id: ""
	I1001 20:24:42.350619   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.350638   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:42.350646   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:42.350659   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:42.429111   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:42.429152   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:42.466741   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:42.466775   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:42.516829   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:42.516870   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:42.530174   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:42.530201   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:42.600444   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:45.101469   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:45.113821   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:45.113904   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:45.148105   65592 cri.go:89] found id: ""
	I1001 20:24:45.148132   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.148146   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:45.148152   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:45.148196   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:45.180980   65592 cri.go:89] found id: ""
	I1001 20:24:45.181012   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.181027   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:45.181046   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:45.181113   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:45.216971   65592 cri.go:89] found id: ""
	I1001 20:24:45.217001   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.217010   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:45.217015   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:45.217060   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:45.252240   65592 cri.go:89] found id: ""
	I1001 20:24:45.252275   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.252287   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:45.252294   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:45.252354   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:45.287389   65592 cri.go:89] found id: ""
	I1001 20:24:45.287419   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.287434   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:45.287440   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:45.287501   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:45.319980   65592 cri.go:89] found id: ""
	I1001 20:24:45.320015   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.320027   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:45.320035   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:45.320101   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:45.351894   65592 cri.go:89] found id: ""
	I1001 20:24:45.351920   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.351931   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:45.351936   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:45.351984   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:45.385370   65592 cri.go:89] found id: ""
	I1001 20:24:45.385400   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.385412   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:45.385423   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:45.385485   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:45.449558   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:45.449584   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:45.449596   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:45.524322   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:45.524372   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:45.560729   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:45.560757   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:45.614098   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:45.614139   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:48.129944   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:48.143420   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:48.143496   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:48.175627   65592 cri.go:89] found id: ""
	I1001 20:24:48.175668   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.175682   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:48.175689   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:48.175747   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:48.210422   65592 cri.go:89] found id: ""
	I1001 20:24:48.210451   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.210462   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:48.210470   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:48.210535   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:48.243916   65592 cri.go:89] found id: ""
	I1001 20:24:48.243952   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.243963   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:48.243972   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:48.244027   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:48.275802   65592 cri.go:89] found id: ""
	I1001 20:24:48.275830   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.275845   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:48.275857   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:48.275917   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:48.311539   65592 cri.go:89] found id: ""
	I1001 20:24:48.311569   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.311579   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:48.311586   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:48.311648   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:48.342606   65592 cri.go:89] found id: ""
	I1001 20:24:48.342646   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.342658   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:48.342666   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:48.342718   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:48.375554   65592 cri.go:89] found id: ""
	I1001 20:24:48.375581   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.375591   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:48.375597   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:48.375642   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:48.407747   65592 cri.go:89] found id: ""
	I1001 20:24:48.407776   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.407789   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:48.407800   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:48.407814   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:48.457470   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:48.457503   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:48.470483   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:48.470517   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:48.533536   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:48.533565   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:48.533580   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:48.614530   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:48.614571   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
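	The block above is one full pass of minikube's diagnostic loop on this node: cri.go lists CRI containers for each expected control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager) plus kindnet and kubernetes-dashboard, finds none (`found id: ""`), and logs.go then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output before retrying. A minimal, hypothetical Go sketch of the same crictl check is below; it is not minikube's actual implementation and assumes crictl is installed and reachable via sudo on the node being inspected.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// checkContainer shells out to crictl the same way the log above does and
// reports whether any container with the given name exists (in any state).
func checkContainer(name string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return false, err
	}
	// crictl prints one container ID per line; empty output means no match,
	// which corresponds to the `found id: ""` lines in the log.
	ids := strings.Fields(string(out))
	return len(ids) > 0, nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
	}
	for _, name := range components {
		found, err := checkContainer(name)
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: found=%v\n", name, found)
	}
}
```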
	I1001 20:24:51.157091   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:51.170292   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:51.170364   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:51.203784   65592 cri.go:89] found id: ""
	I1001 20:24:51.203809   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.203822   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:51.203828   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:51.203917   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:51.239789   65592 cri.go:89] found id: ""
	I1001 20:24:51.239826   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.239834   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:51.239840   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:51.239889   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:51.274562   65592 cri.go:89] found id: ""
	I1001 20:24:51.274595   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.274607   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:51.274617   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:51.274701   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:51.306172   65592 cri.go:89] found id: ""
	I1001 20:24:51.306199   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.306207   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:51.306213   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:51.306269   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:51.339631   65592 cri.go:89] found id: ""
	I1001 20:24:51.339660   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.339668   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:51.339674   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:51.339725   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:51.372128   65592 cri.go:89] found id: ""
	I1001 20:24:51.372154   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.372163   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:51.372169   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:51.372223   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:51.403790   65592 cri.go:89] found id: ""
	I1001 20:24:51.403818   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.403828   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:51.403842   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:51.403890   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:51.437771   65592 cri.go:89] found id: ""
	I1001 20:24:51.437799   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.437808   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:51.437816   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:51.437827   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:51.489824   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:51.489864   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:51.503478   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:51.503508   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:51.573741   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:51.573768   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:51.573780   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:51.662355   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:51.662391   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:54.199747   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:54.212731   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:54.212797   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:54.244554   65592 cri.go:89] found id: ""
	I1001 20:24:54.244586   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.244596   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:54.244602   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:54.244652   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:54.280636   65592 cri.go:89] found id: ""
	I1001 20:24:54.280667   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.280679   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:54.280686   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:54.280737   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:54.318213   65592 cri.go:89] found id: ""
	I1001 20:24:54.318246   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.318257   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:54.318265   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:54.318321   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:54.353563   65592 cri.go:89] found id: ""
	I1001 20:24:54.353595   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.353606   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:54.353615   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:54.353678   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:54.387770   65592 cri.go:89] found id: ""
	I1001 20:24:54.387795   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.387803   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:54.387809   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:54.387869   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:54.421289   65592 cri.go:89] found id: ""
	I1001 20:24:54.421317   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.421325   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:54.421332   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:54.421382   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:54.456221   65592 cri.go:89] found id: ""
	I1001 20:24:54.456261   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.456274   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:54.456282   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:54.456348   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:54.488174   65592 cri.go:89] found id: ""
	I1001 20:24:54.488208   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.488219   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:54.488228   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:54.488241   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:54.540981   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:54.541020   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:54.554099   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:54.554129   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:54.623978   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:54.624013   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:54.624034   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:54.704703   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:54.704738   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:57.241791   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:57.254771   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:57.254843   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:57.290226   65592 cri.go:89] found id: ""
	I1001 20:24:57.290263   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.290271   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:57.290277   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:57.290336   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:57.324910   65592 cri.go:89] found id: ""
	I1001 20:24:57.324938   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.324946   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:57.324951   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:57.325068   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:57.360553   65592 cri.go:89] found id: ""
	I1001 20:24:57.360586   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.360601   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:57.360608   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:57.360669   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:57.395182   65592 cri.go:89] found id: ""
	I1001 20:24:57.395216   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.395229   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:57.395236   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:57.395296   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:57.428967   65592 cri.go:89] found id: ""
	I1001 20:24:57.428998   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.429011   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:57.429017   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:57.429072   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:57.462483   65592 cri.go:89] found id: ""
	I1001 20:24:57.462511   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.462519   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:57.462525   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:57.462581   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:57.495505   65592 cri.go:89] found id: ""
	I1001 20:24:57.495538   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.495550   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:57.495556   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:57.495615   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:57.528132   65592 cri.go:89] found id: ""
	I1001 20:24:57.528164   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.528176   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:57.528188   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:57.528203   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:57.596557   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:57.596583   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:57.596598   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:57.676797   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:57.676830   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:57.714624   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:57.714653   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:57.763801   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:57.763839   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:00.277808   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:00.291432   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:00.291489   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:00.327524   65592 cri.go:89] found id: ""
	I1001 20:25:00.327554   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.327562   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:00.327568   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:00.327618   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:00.364125   65592 cri.go:89] found id: ""
	I1001 20:25:00.364153   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.364162   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:00.364167   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:00.364229   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:00.404507   65592 cri.go:89] found id: ""
	I1001 20:25:00.404543   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.404555   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:00.404564   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:00.404770   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:00.438761   65592 cri.go:89] found id: ""
	I1001 20:25:00.438792   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.438800   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:00.438807   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:00.438862   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:00.473263   65592 cri.go:89] found id: ""
	I1001 20:25:00.473301   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.473313   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:00.473321   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:00.473391   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:00.510276   65592 cri.go:89] found id: ""
	I1001 20:25:00.510307   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.510317   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:00.510324   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:00.510383   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:00.545118   65592 cri.go:89] found id: ""
	I1001 20:25:00.545149   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.545165   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:00.545173   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:00.545229   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:00.577773   65592 cri.go:89] found id: ""
	I1001 20:25:00.577799   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.577810   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:00.577821   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:00.577835   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:00.628978   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:00.629012   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:00.642192   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:00.642225   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:00.711399   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:00.711432   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:00.711446   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:00.792477   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:00.792514   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:03.332492   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:03.347542   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:03.347622   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:03.388263   65592 cri.go:89] found id: ""
	I1001 20:25:03.388292   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.388300   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:03.388306   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:03.388353   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:03.421489   65592 cri.go:89] found id: ""
	I1001 20:25:03.421525   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.421534   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:03.421539   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:03.421634   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:03.457139   65592 cri.go:89] found id: ""
	I1001 20:25:03.457172   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.457182   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:03.457189   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:03.457251   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:03.497203   65592 cri.go:89] found id: ""
	I1001 20:25:03.497232   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.497241   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:03.497247   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:03.497313   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:03.535137   65592 cri.go:89] found id: ""
	I1001 20:25:03.535163   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.535171   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:03.535176   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:03.535221   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:03.569131   65592 cri.go:89] found id: ""
	I1001 20:25:03.569158   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.569166   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:03.569171   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:03.569217   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:03.605289   65592 cri.go:89] found id: ""
	I1001 20:25:03.605321   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.605329   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:03.605336   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:03.605389   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:03.651086   65592 cri.go:89] found id: ""
	I1001 20:25:03.651115   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.651123   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:03.651134   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:03.651145   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:03.731256   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:03.731281   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:03.731299   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:03.809393   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:03.809442   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:03.849171   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:03.849198   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:03.898009   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:03.898045   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:06.411962   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:06.425432   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:06.425513   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:06.463339   65592 cri.go:89] found id: ""
	I1001 20:25:06.463371   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.463383   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:06.463391   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:06.463455   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:06.502527   65592 cri.go:89] found id: ""
	I1001 20:25:06.502561   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.502569   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:06.502611   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:06.502687   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:06.547428   65592 cri.go:89] found id: ""
	I1001 20:25:06.547465   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.547474   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:06.547480   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:06.547539   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:06.581672   65592 cri.go:89] found id: ""
	I1001 20:25:06.581699   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.581708   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:06.581713   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:06.581769   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:06.615391   65592 cri.go:89] found id: ""
	I1001 20:25:06.615436   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.615449   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:06.615457   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:06.615525   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:06.651019   65592 cri.go:89] found id: ""
	I1001 20:25:06.651050   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.651060   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:06.651067   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:06.651142   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:06.687887   65592 cri.go:89] found id: ""
	I1001 20:25:06.687912   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.687922   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:06.687929   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:06.687982   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:06.729234   65592 cri.go:89] found id: ""
	I1001 20:25:06.729263   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.729273   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:06.729282   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:06.729296   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:06.747295   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:06.747326   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:06.816480   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:06.816511   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:06.816524   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:06.896918   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:06.896957   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:06.938922   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:06.938958   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:09.494252   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:09.508085   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:09.508171   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:09.542999   65592 cri.go:89] found id: ""
	I1001 20:25:09.543029   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.543037   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:09.543043   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:09.543100   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:09.578112   65592 cri.go:89] found id: ""
	I1001 20:25:09.578137   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.578145   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:09.578150   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:09.578199   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:09.613123   65592 cri.go:89] found id: ""
	I1001 20:25:09.613150   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.613158   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:09.613166   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:09.613223   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:09.648172   65592 cri.go:89] found id: ""
	I1001 20:25:09.648214   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.648223   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:09.648230   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:09.648302   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:09.681217   65592 cri.go:89] found id: ""
	I1001 20:25:09.681244   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.681254   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:09.681261   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:09.681320   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:09.718166   65592 cri.go:89] found id: ""
	I1001 20:25:09.718196   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.718204   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:09.718212   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:09.718272   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:09.751910   65592 cri.go:89] found id: ""
	I1001 20:25:09.751942   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.751951   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:09.751956   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:09.752004   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:09.789213   65592 cri.go:89] found id: ""
	I1001 20:25:09.789237   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.789246   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:09.789254   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:09.789265   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:09.826746   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:09.826780   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:09.879079   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:09.879123   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:09.892480   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:09.892507   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:09.967048   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:09.967084   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:09.967103   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:12.545057   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:12.557888   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:12.557969   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:12.594881   65592 cri.go:89] found id: ""
	I1001 20:25:12.594928   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.594942   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:12.594952   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:12.595021   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:12.631393   65592 cri.go:89] found id: ""
	I1001 20:25:12.631425   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.631437   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:12.631445   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:12.631504   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:12.666442   65592 cri.go:89] found id: ""
	I1001 20:25:12.666476   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.666486   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:12.666493   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:12.666548   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:12.703321   65592 cri.go:89] found id: ""
	I1001 20:25:12.703359   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.703371   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:12.703379   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:12.703444   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:12.742188   65592 cri.go:89] found id: ""
	I1001 20:25:12.742216   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.742224   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:12.742230   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:12.742276   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:12.781829   65592 cri.go:89] found id: ""
	I1001 20:25:12.781859   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.781869   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:12.781876   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:12.781940   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:12.815368   65592 cri.go:89] found id: ""
	I1001 20:25:12.815397   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.815405   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:12.815411   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:12.815463   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:12.850913   65592 cri.go:89] found id: ""
	I1001 20:25:12.850941   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.850949   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:12.850958   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:12.850968   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:12.901409   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:12.901443   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:12.914517   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:12.914567   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:12.980086   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:12.980119   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:12.980135   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:13.055950   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:13.055989   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:15.595692   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:15.609648   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:15.609728   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:15.645477   65592 cri.go:89] found id: ""
	I1001 20:25:15.645502   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.645510   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:15.645514   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:15.645558   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:15.679674   65592 cri.go:89] found id: ""
	I1001 20:25:15.679702   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.679711   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:15.679717   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:15.679774   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:15.718057   65592 cri.go:89] found id: ""
	I1001 20:25:15.718082   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.718092   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:15.718097   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:15.718153   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:15.754094   65592 cri.go:89] found id: ""
	I1001 20:25:15.754121   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.754130   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:15.754136   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:15.754189   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:15.790415   65592 cri.go:89] found id: ""
	I1001 20:25:15.790450   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.790464   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:15.790472   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:15.790535   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:15.825603   65592 cri.go:89] found id: ""
	I1001 20:25:15.825630   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.825645   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:15.825653   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:15.825717   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:15.861330   65592 cri.go:89] found id: ""
	I1001 20:25:15.861356   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.861368   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:15.861375   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:15.861451   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:15.897534   65592 cri.go:89] found id: ""
	I1001 20:25:15.897564   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.897575   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:15.897584   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:15.897598   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:15.972842   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:15.972881   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:16.010625   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:16.010653   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:16.062717   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:16.062762   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:16.076538   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:16.076568   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:16.156886   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
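	Every `describe nodes` attempt in these cycles fails with a refused connection to localhost:8443 because no kube-apiserver container ever starts, so the bundled kubectl has nothing to reach. The sketch below is a small, hypothetical readiness probe for that endpoint, shown only to illustrate the failure mode; the port and the skipped TLS verification mirror what the log implies and are not a recommended production check.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probe hits the apiserver /healthz endpoint once. While the apiserver is
// down it fails the same way kubectl does above: the connection to
// localhost:8443 is refused.
func probe(url string) error {
	client := &http.Client{
		Timeout: 3 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a self-signed certificate in this setup;
			// skip verification for this illustrative probe only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println("healthz status:", resp.Status)
	return nil
}

func main() {
	if err := probe("https://localhost:8443/healthz"); err != nil {
		fmt.Println("apiserver not reachable:", err)
	}
}
```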
	I1001 20:25:18.657436   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:18.673018   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:18.673093   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:18.708040   65592 cri.go:89] found id: ""
	I1001 20:25:18.708078   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.708091   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:18.708100   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:18.708167   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:18.740152   65592 cri.go:89] found id: ""
	I1001 20:25:18.740188   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.740200   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:18.740207   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:18.740264   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:18.778238   65592 cri.go:89] found id: ""
	I1001 20:25:18.778270   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.778279   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:18.778287   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:18.778351   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:18.815450   65592 cri.go:89] found id: ""
	I1001 20:25:18.815489   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.815503   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:18.815512   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:18.815576   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:18.850008   65592 cri.go:89] found id: ""
	I1001 20:25:18.850038   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.850047   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:18.850053   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:18.850104   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:18.890919   65592 cri.go:89] found id: ""
	I1001 20:25:18.890943   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.890951   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:18.890957   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:18.891004   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:18.934196   65592 cri.go:89] found id: ""
	I1001 20:25:18.934228   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.934240   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:18.934247   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:18.934307   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:18.977817   65592 cri.go:89] found id: ""
	I1001 20:25:18.977850   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.977862   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:18.977875   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:18.977889   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:19.039867   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:19.039910   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:19.054277   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:19.054310   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:19.125736   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:19.125765   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:19.125782   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:19.208588   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:19.208622   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:21.750881   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:21.766638   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:21.766712   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:21.801906   65592 cri.go:89] found id: ""
	I1001 20:25:21.801930   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.801938   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:21.801944   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:21.801990   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:21.842801   65592 cri.go:89] found id: ""
	I1001 20:25:21.842830   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.842844   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:21.842852   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:21.842917   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:21.876550   65592 cri.go:89] found id: ""
	I1001 20:25:21.876577   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.876588   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:21.876594   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:21.876647   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:21.910972   65592 cri.go:89] found id: ""
	I1001 20:25:21.911007   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.911016   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:21.911022   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:21.911098   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:21.945721   65592 cri.go:89] found id: ""
	I1001 20:25:21.945753   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.945765   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:21.945773   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:21.945833   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:21.982101   65592 cri.go:89] found id: ""
	I1001 20:25:21.982131   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.982143   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:21.982151   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:21.982242   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:22.016526   65592 cri.go:89] found id: ""
	I1001 20:25:22.016558   65592 logs.go:276] 0 containers: []
	W1001 20:25:22.016569   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:22.016577   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:22.016632   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:22.054792   65592 cri.go:89] found id: ""
	I1001 20:25:22.054822   65592 logs.go:276] 0 containers: []
	W1001 20:25:22.054833   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:22.054844   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:22.054863   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:22.105936   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:22.105974   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:22.120834   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:22.120858   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:22.195177   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:22.195211   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:22.195228   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:22.281244   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:22.281285   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:24.824197   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:24.840967   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:24.841030   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:24.882399   65592 cri.go:89] found id: ""
	I1001 20:25:24.882429   65592 logs.go:276] 0 containers: []
	W1001 20:25:24.882443   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:24.882449   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:24.882497   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:24.935548   65592 cri.go:89] found id: ""
	I1001 20:25:24.935581   65592 logs.go:276] 0 containers: []
	W1001 20:25:24.935590   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:24.935596   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:24.935644   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:24.976931   65592 cri.go:89] found id: ""
	I1001 20:25:24.976958   65592 logs.go:276] 0 containers: []
	W1001 20:25:24.976969   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:24.976976   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:24.977035   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:25.009926   65592 cri.go:89] found id: ""
	I1001 20:25:25.009959   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.009968   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:25.009975   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:25.010039   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:25.043261   65592 cri.go:89] found id: ""
	I1001 20:25:25.043299   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.043310   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:25.043316   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:25.043377   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:25.075177   65592 cri.go:89] found id: ""
	I1001 20:25:25.075205   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.075214   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:25.075221   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:25.075267   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:25.109792   65592 cri.go:89] found id: ""
	I1001 20:25:25.109832   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.109845   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:25.109871   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:25.109942   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:25.148721   65592 cri.go:89] found id: ""
	I1001 20:25:25.148753   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.148763   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:25.148772   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:25.148790   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:25.161802   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:25.161841   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:25.227699   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:25.227732   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:25.227750   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:25.314028   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:25.314075   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:25.354881   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:25.354919   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:27.906936   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:27.920745   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:27.920806   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:27.955399   65592 cri.go:89] found id: ""
	I1001 20:25:27.955426   65592 logs.go:276] 0 containers: []
	W1001 20:25:27.955444   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:27.955450   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:27.955503   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:27.993714   65592 cri.go:89] found id: ""
	I1001 20:25:27.993747   65592 logs.go:276] 0 containers: []
	W1001 20:25:27.993759   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:27.993766   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:27.993827   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:28.028439   65592 cri.go:89] found id: ""
	I1001 20:25:28.028475   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.028487   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:28.028494   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:28.028563   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:28.072935   65592 cri.go:89] found id: ""
	I1001 20:25:28.072966   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.072977   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:28.072985   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:28.073050   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:28.107241   65592 cri.go:89] found id: ""
	I1001 20:25:28.107275   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.107285   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:28.107293   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:28.107357   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:28.141382   65592 cri.go:89] found id: ""
	I1001 20:25:28.141412   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.141423   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:28.141431   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:28.141494   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:28.175749   65592 cri.go:89] found id: ""
	I1001 20:25:28.175782   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.175794   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:28.175801   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:28.175864   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:28.214968   65592 cri.go:89] found id: ""
	I1001 20:25:28.214997   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.215006   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:28.215015   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:28.215027   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:28.259588   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:28.259619   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:28.314439   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:28.314480   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:28.327938   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:28.327967   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:28.399479   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:28.399508   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:28.399523   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:30.978863   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:30.991415   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:30.991493   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:31.026443   65592 cri.go:89] found id: ""
	I1001 20:25:31.026480   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.026494   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:31.026513   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:31.026576   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:31.060635   65592 cri.go:89] found id: ""
	I1001 20:25:31.060663   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.060678   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:31.060684   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:31.060743   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:31.095494   65592 cri.go:89] found id: ""
	I1001 20:25:31.095525   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.095533   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:31.095540   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:31.095587   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:31.130693   65592 cri.go:89] found id: ""
	I1001 20:25:31.130718   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.130728   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:31.130741   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:31.130802   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:31.167928   65592 cri.go:89] found id: ""
	I1001 20:25:31.167960   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.167973   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:31.167980   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:31.168033   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:31.202813   65592 cri.go:89] found id: ""
	I1001 20:25:31.202843   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.202855   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:31.202864   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:31.202925   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:31.240424   65592 cri.go:89] found id: ""
	I1001 20:25:31.240459   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.240468   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:31.240474   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:31.240521   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:31.275470   65592 cri.go:89] found id: ""
	I1001 20:25:31.275502   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.275510   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:31.275518   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:31.275529   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:31.329604   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:31.329642   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:31.342695   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:31.342724   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:31.410169   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:31.410275   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:31.410303   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:31.489630   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:31.489677   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:34.027406   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:34.039902   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:34.039975   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:34.074992   65592 cri.go:89] found id: ""
	I1001 20:25:34.075025   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.075038   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:34.075045   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:34.075106   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:34.110264   65592 cri.go:89] found id: ""
	I1001 20:25:34.110293   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.110304   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:34.110311   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:34.110371   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:34.147097   65592 cri.go:89] found id: ""
	I1001 20:25:34.147132   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.147143   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:34.147151   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:34.147208   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:34.179453   65592 cri.go:89] found id: ""
	I1001 20:25:34.179481   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.179491   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:34.179500   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:34.179554   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:34.212407   65592 cri.go:89] found id: ""
	I1001 20:25:34.212433   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.212442   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:34.212449   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:34.212495   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:34.244400   65592 cri.go:89] found id: ""
	I1001 20:25:34.244429   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.244440   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:34.244447   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:34.244510   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:34.278423   65592 cri.go:89] found id: ""
	I1001 20:25:34.278448   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.278458   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:34.278464   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:34.278520   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:34.311019   65592 cri.go:89] found id: ""
	I1001 20:25:34.311049   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.311059   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:34.311072   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:34.311083   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:34.347521   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:34.347549   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:34.400717   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:34.400754   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:34.414550   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:34.414576   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:34.486478   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:34.486503   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:34.486519   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:37.071687   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:37.084941   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:37.085025   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:37.119834   65592 cri.go:89] found id: ""
	I1001 20:25:37.119862   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.119870   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:37.119875   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:37.119984   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:37.154795   65592 cri.go:89] found id: ""
	I1001 20:25:37.154832   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.154851   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:37.154867   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:37.154927   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:37.191552   65592 cri.go:89] found id: ""
	I1001 20:25:37.191581   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.191592   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:37.191599   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:37.191670   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:37.228883   65592 cri.go:89] found id: ""
	I1001 20:25:37.228918   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.228928   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:37.228936   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:37.229000   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:37.263533   65592 cri.go:89] found id: ""
	I1001 20:25:37.263558   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.263568   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:37.263577   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:37.263638   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:37.297367   65592 cri.go:89] found id: ""
	I1001 20:25:37.297401   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.297414   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:37.297422   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:37.297486   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:37.331091   65592 cri.go:89] found id: ""
	I1001 20:25:37.331121   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.331129   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:37.331135   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:37.331202   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:37.364861   65592 cri.go:89] found id: ""
	I1001 20:25:37.364889   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.364897   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:37.364905   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:37.364916   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:37.417507   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:37.417545   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:37.431613   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:37.431646   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:37.497821   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:37.497846   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:37.497861   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:37.578951   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:37.578996   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:40.121350   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:40.134553   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:40.134634   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:40.169277   65592 cri.go:89] found id: ""
	I1001 20:25:40.169313   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.169325   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:40.169333   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:40.169399   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:40.204111   65592 cri.go:89] found id: ""
	I1001 20:25:40.204144   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.204153   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:40.204159   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:40.204206   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:40.237841   65592 cri.go:89] found id: ""
	I1001 20:25:40.237872   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.237880   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:40.237886   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:40.237942   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:40.273081   65592 cri.go:89] found id: ""
	I1001 20:25:40.273108   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.273117   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:40.273123   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:40.273186   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:40.307351   65592 cri.go:89] found id: ""
	I1001 20:25:40.307384   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.307394   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:40.307399   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:40.307462   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:40.340543   65592 cri.go:89] found id: ""
	I1001 20:25:40.340569   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.340578   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:40.340584   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:40.340655   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:40.376070   65592 cri.go:89] found id: ""
	I1001 20:25:40.376112   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.376123   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:40.376130   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:40.376194   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:40.410236   65592 cri.go:89] found id: ""
	I1001 20:25:40.410267   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.410279   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:40.410289   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:40.410300   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:40.463799   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:40.463835   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:40.478403   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:40.478436   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:40.547250   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:40.547279   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:40.547291   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:40.630061   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:40.630098   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:43.170764   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:43.183046   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:43.183124   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:43.222995   65592 cri.go:89] found id: ""
	I1001 20:25:43.223029   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.223038   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:43.223044   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:43.223105   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:43.256861   65592 cri.go:89] found id: ""
	I1001 20:25:43.256891   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.256902   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:43.256910   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:43.257002   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:43.292643   65592 cri.go:89] found id: ""
	I1001 20:25:43.292687   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.292698   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:43.292704   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:43.292754   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:43.326539   65592 cri.go:89] found id: ""
	I1001 20:25:43.326568   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.326576   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:43.326582   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:43.326628   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:43.359787   65592 cri.go:89] found id: ""
	I1001 20:25:43.359813   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.359822   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:43.359828   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:43.359890   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:43.392045   65592 cri.go:89] found id: ""
	I1001 20:25:43.392076   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.392086   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:43.392092   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:43.392145   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:43.429498   65592 cri.go:89] found id: ""
	I1001 20:25:43.429529   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.429538   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:43.429544   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:43.429591   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:43.462728   65592 cri.go:89] found id: ""
	I1001 20:25:43.462760   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.462771   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:43.462781   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:43.462798   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:43.512683   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:43.512717   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:43.527253   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:43.527285   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:43.598963   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:43.598989   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:43.599003   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:43.679743   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:43.679790   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:46.217101   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:46.230349   65592 kubeadm.go:597] duration metric: took 4m1.895228035s to restartPrimaryControlPlane
	W1001 20:25:46.230421   65592 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
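The block above is minikube's restartPrimaryControlPlane loop: roughly every three seconds it asks CRI-O (via crictl) whether any control-plane container exists and, finding none, re-gathers kubelet, dmesg, CRI-O and container-status logs until the four-minute budget runs out, at which point it falls back to resetting the cluster. A minimal sketch of the equivalent manual checks, assuming shell access to the node (for example via `minikube ssh` with the profile this test created); every command is taken from the log lines above:

    sudo crictl ps -a --quiet --name=kube-apiserver   # empty output = no apiserver container
    sudo journalctl -u kubelet -n 400                 # recent kubelet logs
    sudo journalctl -u crio -n 400                    # recent CRI-O logs
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig       # fails here: apiserver on localhost:8443 is down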
	I1001 20:25:46.230450   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1001 20:25:47.271291   65592 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.040818559s)
	I1001 20:25:47.271362   65592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:25:47.285083   65592 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 20:25:47.295774   65592 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:25:47.305487   65592 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:25:47.305511   65592 kubeadm.go:157] found existing configuration files:
	
	I1001 20:25:47.305568   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 20:25:47.314488   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:25:47.314573   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:25:47.323852   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 20:25:47.332496   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:25:47.332553   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:25:47.341236   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 20:25:47.349932   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:25:47.350002   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:25:47.359345   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 20:25:47.369180   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:25:47.369233   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
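Before retrying `kubeadm init`, minikube checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes the file when the grep fails; because `kubeadm reset` already deleted them, every grep exits with status 2 and the rm calls are no-ops. A condensed sketch of that check, using only the paths and endpoint shown in the log (the loop form is illustrative, not how minikube issues the commands):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done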
	I1001 20:25:47.378232   65592 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 20:25:47.595501   65592 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 20:27:43.940129   65592 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1001 20:27:43.940232   65592 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1001 20:27:43.942002   65592 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1001 20:27:43.942068   65592 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:27:43.942170   65592 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:27:43.942281   65592 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:27:43.942421   65592 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1001 20:27:43.942518   65592 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 20:27:43.944271   65592 out.go:235]   - Generating certificates and keys ...
	I1001 20:27:43.944389   65592 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:27:43.944486   65592 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:27:43.944600   65592 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 20:27:43.944693   65592 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 20:27:43.944797   65592 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 20:27:43.944888   65592 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 20:27:43.944985   65592 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 20:27:43.945072   65592 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 20:27:43.945190   65592 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 20:27:43.945301   65592 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 20:27:43.945361   65592 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 20:27:43.945420   65592 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:27:43.945467   65592 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:27:43.945515   65592 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:27:43.945585   65592 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:27:43.945651   65592 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:27:43.945772   65592 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:27:43.945899   65592 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:27:43.945961   65592 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:27:43.946057   65592 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:27:43.947517   65592 out.go:235]   - Booting up control plane ...
	I1001 20:27:43.947644   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:27:43.947767   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:27:43.947861   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:27:43.947978   65592 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:27:43.948185   65592 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1001 20:27:43.948258   65592 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1001 20:27:43.948396   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.948618   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.948695   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.948930   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.948991   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.949149   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.949232   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.949380   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.949439   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.949597   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.949616   65592 kubeadm.go:310] 
	I1001 20:27:43.949658   65592 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1001 20:27:43.949693   65592 kubeadm.go:310] 		timed out waiting for the condition
	I1001 20:27:43.949704   65592 kubeadm.go:310] 
	I1001 20:27:43.949737   65592 kubeadm.go:310] 	This error is likely caused by:
	I1001 20:27:43.949766   65592 kubeadm.go:310] 		- The kubelet is not running
	I1001 20:27:43.949863   65592 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1001 20:27:43.949871   65592 kubeadm.go:310] 
	I1001 20:27:43.949968   65592 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1001 20:27:43.950000   65592 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1001 20:27:43.950034   65592 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1001 20:27:43.950040   65592 kubeadm.go:310] 
	I1001 20:27:43.950136   65592 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1001 20:27:43.950207   65592 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1001 20:27:43.950213   65592 kubeadm.go:310] 
	I1001 20:27:43.950310   65592 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1001 20:27:43.950389   65592 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1001 20:27:43.950454   65592 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1001 20:27:43.950533   65592 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1001 20:27:43.950566   65592 kubeadm.go:310] 
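The [kubelet-check] lines above record kubeadm polling the kubelet's health endpoint and timing out after 40s plus retries. A short sketch of the same probe together with the troubleshooting steps kubeadm itself suggests, assuming shell access to the node (all commands come from the messages above):

    curl -sSL http://localhost:10248/healthz          # the probe kubeadm retries; refused in this run
    systemctl status kubelet                          # is the kubelet service running?
    sudo journalctl -xeu kubelet                      # kubelet unit logs with recent errors
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause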
	W1001 20:27:43.950665   65592 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1001 20:27:43.950707   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1001 20:27:44.404995   65592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:27:44.421130   65592 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:27:44.431204   65592 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:27:44.431228   65592 kubeadm.go:157] found existing configuration files:
	
	I1001 20:27:44.431270   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 20:27:44.440792   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:27:44.440857   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:27:44.450469   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 20:27:44.459640   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:27:44.459695   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:27:44.469335   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 20:27:44.478848   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:27:44.478904   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:27:44.489162   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 20:27:44.501070   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:27:44.501157   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 20:27:44.511970   65592 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 20:27:44.728685   65592 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 20:29:40.678676   65592 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1001 20:29:40.678797   65592 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1001 20:29:40.680563   65592 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1001 20:29:40.680613   65592 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:29:40.680680   65592 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:29:40.680788   65592 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:29:40.680868   65592 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1001 20:29:40.681030   65592 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 20:29:40.683042   65592 out.go:235]   - Generating certificates and keys ...
	I1001 20:29:40.683149   65592 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:29:40.683245   65592 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:29:40.683353   65592 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 20:29:40.683435   65592 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 20:29:40.683545   65592 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 20:29:40.683605   65592 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 20:29:40.683665   65592 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 20:29:40.683723   65592 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 20:29:40.683793   65592 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 20:29:40.683878   65592 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 20:29:40.683956   65592 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 20:29:40.684054   65592 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:29:40.684127   65592 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:29:40.684212   65592 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:29:40.684303   65592 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:29:40.684414   65592 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:29:40.684551   65592 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:29:40.684661   65592 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:29:40.684724   65592 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:29:40.684827   65592 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:29:40.686427   65592 out.go:235]   - Booting up control plane ...
	I1001 20:29:40.686534   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:29:40.686621   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:29:40.686710   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:29:40.686820   65592 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:29:40.686996   65592 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1001 20:29:40.687063   65592 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1001 20:29:40.687127   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.687336   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.687443   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.687674   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.687759   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.687958   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.688047   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.688212   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.688274   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.688510   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.688519   65592 kubeadm.go:310] 
	I1001 20:29:40.688566   65592 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1001 20:29:40.688610   65592 kubeadm.go:310] 		timed out waiting for the condition
	I1001 20:29:40.688617   65592 kubeadm.go:310] 
	I1001 20:29:40.688646   65592 kubeadm.go:310] 	This error is likely caused by:
	I1001 20:29:40.688680   65592 kubeadm.go:310] 		- The kubelet is not running
	I1001 20:29:40.688770   65592 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1001 20:29:40.688778   65592 kubeadm.go:310] 
	I1001 20:29:40.688882   65592 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1001 20:29:40.688937   65592 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1001 20:29:40.688986   65592 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1001 20:29:40.688996   65592 kubeadm.go:310] 
	I1001 20:29:40.689114   65592 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1001 20:29:40.689222   65592 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1001 20:29:40.689237   65592 kubeadm.go:310] 
	I1001 20:29:40.689376   65592 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1001 20:29:40.689517   65592 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1001 20:29:40.689638   65592 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1001 20:29:40.689709   65592 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1001 20:29:40.689786   65592 kubeadm.go:310] 
	I1001 20:29:40.689796   65592 kubeadm.go:394] duration metric: took 7m56.416911577s to StartCluster
	I1001 20:29:40.689838   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:29:40.689896   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:29:40.733027   65592 cri.go:89] found id: ""
	I1001 20:29:40.733059   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.733068   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:29:40.733073   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:29:40.733120   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:29:40.767975   65592 cri.go:89] found id: ""
	I1001 20:29:40.768010   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.768021   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:29:40.768029   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:29:40.768095   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:29:40.802624   65592 cri.go:89] found id: ""
	I1001 20:29:40.802657   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.802668   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:29:40.802676   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:29:40.802748   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:29:40.838109   65592 cri.go:89] found id: ""
	I1001 20:29:40.838142   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.838151   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:29:40.838157   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:29:40.838204   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:29:40.873083   65592 cri.go:89] found id: ""
	I1001 20:29:40.873112   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.873124   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:29:40.873131   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:29:40.873192   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:29:40.907675   65592 cri.go:89] found id: ""
	I1001 20:29:40.907705   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.907714   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:29:40.907720   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:29:40.907775   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:29:40.941641   65592 cri.go:89] found id: ""
	I1001 20:29:40.941669   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.941678   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:29:40.941691   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:29:40.941748   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:29:40.978189   65592 cri.go:89] found id: ""
	I1001 20:29:40.978216   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.978227   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:29:40.978238   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:29:40.978254   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:29:41.053798   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:29:41.053823   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:29:41.053835   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:29:41.160669   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:29:41.160715   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:29:41.218152   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:29:41.218182   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:29:41.274784   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:29:41.274821   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1001 20:29:41.288554   65592 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1001 20:29:41.288613   65592 out.go:270] * 
	* 
	W1001 20:29:41.288663   65592 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1001 20:29:41.288674   65592 out.go:270] * 
	* 
	W1001 20:29:41.289525   65592 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 20:29:41.292969   65592 out.go:201] 
	W1001 20:29:41.294238   65592 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1001 20:29:41.294278   65592 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1001 20:29:41.294297   65592 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1001 20:29:41.295783   65592 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-359369 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
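The kubeadm output above keeps pointing at the same three checks (kubelet service status, kubelet journal, CRI-O container listing), and minikube's exit message suggests retrying with the systemd cgroup driver. A minimal manual follow-up sketch, assuming the profile name and flags from the failing command above and treating the --extra-config value as the hint printed in the log rather than a confirmed fix:

	# inside the node (e.g. via 'minikube ssh -p old-k8s-version-359369'):
	sudo systemctl status kubelet                        # is the kubelet service running at all?
	sudo journalctl -xeu kubelet | tail -n 100           # why is /healthz on 127.0.0.1:10248 refusing connections?
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# from the host: retry the start with the cgroup-driver hint from the log
	out/minikube-linux-amd64 start -p old-k8s-version-359369 --memory=2200 \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd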
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-359369 -n old-k8s-version-359369
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-359369 -n old-k8s-version-359369: exit status 2 (232.725749ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-359369 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-402897                              | cert-expiration-402897       | jenkins | v1.34.0 | 01 Oct 24 20:12 UTC | 01 Oct 24 20:12 UTC |
	| start   | -p embed-certs-106982                                  | embed-certs-106982           | jenkins | v1.34.0 | 01 Oct 24 20:12 UTC | 01 Oct 24 20:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-262337             | no-preload-262337            | jenkins | v1.34.0 | 01 Oct 24 20:13 UTC | 01 Oct 24 20:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-262337                                   | no-preload-262337            | jenkins | v1.34.0 | 01 Oct 24 20:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-106982            | embed-certs-106982           | jenkins | v1.34.0 | 01 Oct 24 20:13 UTC | 01 Oct 24 20:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-106982                                  | embed-certs-106982           | jenkins | v1.34.0 | 01 Oct 24 20:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396    | jenkins | v1.34.0 | 01 Oct 24 20:14 UTC | 01 Oct 24 20:14 UTC |
	| start   | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396    | jenkins | v1.34.0 | 01 Oct 24 20:14 UTC | 01 Oct 24 20:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396    | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396    | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC | 01 Oct 24 20:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-359369        | old-k8s-version-359369       | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-262337                  | no-preload-262337            | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-262337                                   | no-preload-262337            | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC | 01 Oct 24 20:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-106982                 | embed-certs-106982           | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396    | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC | 01 Oct 24 20:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-556200 | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC | 01 Oct 24 20:16 UTC |
	|         | disable-driver-mounts-556200                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC | 01 Oct 24 20:21 UTC |
	|         | default-k8s-diff-port-878552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-106982                                  | embed-certs-106982           | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC | 01 Oct 24 20:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-359369                              | old-k8s-version-359369       | jenkins | v1.34.0 | 01 Oct 24 20:17 UTC | 01 Oct 24 20:17 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-359369             | old-k8s-version-359369       | jenkins | v1.34.0 | 01 Oct 24 20:17 UTC | 01 Oct 24 20:17 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-359369                              | old-k8s-version-359369       | jenkins | v1.34.0 | 01 Oct 24 20:17 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-878552  | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:22 UTC | 01 Oct 24 20:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:22 UTC |                     |
	|         | default-k8s-diff-port-878552                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-878552       | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:24 UTC |                     |
	|         | default-k8s-diff-port-878552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 20:24:40
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
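	The entries that follow use the klog-style prefix described by the "Log line format" line above. Purely as an illustrative aid (not part of minikube or of this report), here is a minimal Go sketch that splits such a prefix into severity, date, time, PID, source location and message; the regular expression and field names are assumptions chosen to match the documented format:

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg,
// i.e. the "Log line format" documented in the header above.
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^ ]+:\d+)\] (.*)$`)

func main() {
	sample := "I1001 20:24:40.832961   68418 out.go:345] Setting OutFile to fd 1 ..."
	if m := klogLine.FindStringSubmatch(sample); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}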
	I1001 20:24:40.832961   68418 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:24:40.833061   68418 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:24:40.833066   68418 out.go:358] Setting ErrFile to fd 2...
	I1001 20:24:40.833070   68418 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:24:40.833265   68418 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 20:24:40.833818   68418 out.go:352] Setting JSON to false
	I1001 20:24:40.834796   68418 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7623,"bootTime":1727806658,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 20:24:40.834894   68418 start.go:139] virtualization: kvm guest
	I1001 20:24:40.837148   68418 out.go:177] * [default-k8s-diff-port-878552] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 20:24:40.838511   68418 notify.go:220] Checking for updates...
	I1001 20:24:40.838551   68418 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 20:24:40.839938   68418 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 20:24:40.841161   68418 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:24:40.842268   68418 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 20:24:40.843373   68418 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 20:24:40.844538   68418 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 20:24:40.846141   68418 config.go:182] Loaded profile config "default-k8s-diff-port-878552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:24:40.846513   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:24:40.846561   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:24:40.862168   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42661
	I1001 20:24:40.862628   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:24:40.863294   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:24:40.863326   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:24:40.863699   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:24:40.863903   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:24:40.864180   68418 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 20:24:40.864548   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:24:40.864620   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:24:40.880173   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45711
	I1001 20:24:40.880719   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:24:40.881220   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:24:40.881245   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:24:40.881581   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:24:40.881795   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:24:40.920802   68418 out.go:177] * Using the kvm2 driver based on existing profile
	I1001 20:24:40.921986   68418 start.go:297] selected driver: kvm2
	I1001 20:24:40.921999   68418 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-878552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-878552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.4 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:24:40.922122   68418 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 20:24:40.922802   68418 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:24:40.922895   68418 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-11198/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 20:24:40.938386   68418 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 20:24:40.938811   68418 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:24:40.938841   68418 cni.go:84] Creating CNI manager for ""
	I1001 20:24:40.938880   68418 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:24:40.938931   68418 start.go:340] cluster config:
	{Name:default-k8s-diff-port-878552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-878552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.4 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:24:40.939036   68418 iso.go:125] acquiring lock: {Name:mk06aa0d5182c9fbfa5e40313025cbec2b4400a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:24:40.940656   68418 out.go:177] * Starting "default-k8s-diff-port-878552" primary control-plane node in "default-k8s-diff-port-878552" cluster
	I1001 20:24:40.941946   68418 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 20:24:40.942006   68418 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 20:24:40.942023   68418 cache.go:56] Caching tarball of preloaded images
	I1001 20:24:40.942155   68418 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 20:24:40.942166   68418 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 20:24:40.942298   68418 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/config.json ...
	I1001 20:24:40.942537   68418 start.go:360] acquireMachinesLock for default-k8s-diff-port-878552: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 20:24:40.942581   68418 start.go:364] duration metric: took 24.859µs to acquireMachinesLock for "default-k8s-diff-port-878552"
	I1001 20:24:40.942601   68418 start.go:96] Skipping create...Using existing machine configuration
	I1001 20:24:40.942608   68418 fix.go:54] fixHost starting: 
	I1001 20:24:40.942921   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:24:40.942954   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:24:40.958447   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36711
	I1001 20:24:40.958976   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:24:40.960190   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:24:40.960223   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:24:40.960575   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:24:40.960770   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:24:40.960921   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetState
	I1001 20:24:40.962765   68418 fix.go:112] recreateIfNeeded on default-k8s-diff-port-878552: state=Running err=<nil>
	W1001 20:24:40.962786   68418 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 20:24:40.964520   68418 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-878552" VM ...
	I1001 20:24:37.763268   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:40.262669   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:39.025570   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:39.040932   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:39.041011   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:39.076620   65592 cri.go:89] found id: ""
	I1001 20:24:39.076649   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.076659   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:39.076666   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:39.076734   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:39.113395   65592 cri.go:89] found id: ""
	I1001 20:24:39.113422   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.113430   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:39.113436   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:39.113490   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:39.147839   65592 cri.go:89] found id: ""
	I1001 20:24:39.147877   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.147890   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:39.147899   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:39.147966   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:39.179721   65592 cri.go:89] found id: ""
	I1001 20:24:39.179758   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.179769   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:39.179777   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:39.179842   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:39.211511   65592 cri.go:89] found id: ""
	I1001 20:24:39.211541   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.211549   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:39.211554   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:39.211603   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:39.243517   65592 cri.go:89] found id: ""
	I1001 20:24:39.243544   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.243552   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:39.243557   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:39.243623   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:39.276159   65592 cri.go:89] found id: ""
	I1001 20:24:39.276182   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.276189   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:39.276195   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:39.276239   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:39.307242   65592 cri.go:89] found id: ""
	I1001 20:24:39.307274   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.307285   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:39.307295   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:39.307307   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:39.387442   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:39.387486   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:39.423123   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:39.423156   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:39.474648   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:39.474686   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:39.488129   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:39.488158   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:39.557478   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
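	The cycle above repeats for each control-plane component: list CRI containers by name, warn when none is found, then gather kubelet, dmesg, CRI-O and describe-nodes output (the describe step fails because nothing is answering on localhost:8443). Purely as an illustration of that per-component check, and assuming crictl and sudo are available on the node, a short Go sketch of an equivalent listing; this is not minikube's own cri.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs asks crictl for the IDs of all containers (running or not)
// whose name matches the given component, mirroring the
// "listing CRI containers" / "found id" steps in the log above.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
		ids, err := listContainerIDs(c)
		switch {
		case err != nil:
			fmt.Printf("E listing %q: %v\n", c, err)
		case len(ids) == 0:
			// Corresponds to the repeated "No container was found matching ..." warnings above.
			fmt.Printf("W no container was found matching %q\n", c)
		default:
			fmt.Printf("I %q containers: %v\n", c, ids)
		}
	}
}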
	I1001 20:24:42.058114   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:42.071979   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:42.072056   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:42.110529   65592 cri.go:89] found id: ""
	I1001 20:24:42.110557   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.110565   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:42.110570   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:42.110619   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:42.145408   65592 cri.go:89] found id: ""
	I1001 20:24:42.145436   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.145445   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:42.145450   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:42.145509   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:42.180602   65592 cri.go:89] found id: ""
	I1001 20:24:42.180641   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.180655   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:42.180664   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:42.180722   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:38.119187   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:40.619080   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:40.965599   68418 machine.go:93] provisionDockerMachine start ...
	I1001 20:24:40.965619   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:24:40.965852   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:24:40.968710   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:24:40.969253   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:20:43 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:24:40.969286   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:24:40.969517   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:24:40.969724   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:24:40.969960   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:24:40.970112   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:24:40.970316   68418 main.go:141] libmachine: Using SSH client type: native
	I1001 20:24:40.970570   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1001 20:24:40.970584   68418 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 20:24:43.860755   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
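	The dial error above (and its repeats further down in this log) means the default-k8s-diff-port-878552 VM's SSH port was unreachable while provisioning restarted. As an illustration only, a minimal Go probe for the same condition; the address comes from the log, while the timeout and retry policy are assumptions rather than minikube's own behaviour:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address taken from the log above; timeout and retry count are illustrative.
	const addr = "192.168.50.4:22"
	for attempt := 1; attempt <= 5; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(2 * time.Second)
			continue
		}
		conn.Close()
		fmt.Printf("attempt %d: %s reachable\n", attempt, addr)
		return
	}
	fmt.Printf("%s never became reachable; the guest is likely still rebooting or its network is down\n", addr)
}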
	I1001 20:24:42.262933   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:44.762857   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:42.214116   65592 cri.go:89] found id: ""
	I1001 20:24:42.214148   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.214160   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:42.214168   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:42.214224   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:42.246785   65592 cri.go:89] found id: ""
	I1001 20:24:42.246814   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.246825   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:42.246832   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:42.246900   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:42.281586   65592 cri.go:89] found id: ""
	I1001 20:24:42.281633   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.281645   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:42.281660   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:42.281724   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:42.318982   65592 cri.go:89] found id: ""
	I1001 20:24:42.319015   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.319025   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:42.319032   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:42.319085   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:42.350592   65592 cri.go:89] found id: ""
	I1001 20:24:42.350619   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.350638   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:42.350646   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:42.350659   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:42.429111   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:42.429152   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:42.466741   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:42.466775   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:42.516829   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:42.516870   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:42.530174   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:42.530201   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:42.600444   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:45.101469   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:45.113821   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:45.113904   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:45.148105   65592 cri.go:89] found id: ""
	I1001 20:24:45.148132   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.148146   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:45.148152   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:45.148196   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:45.180980   65592 cri.go:89] found id: ""
	I1001 20:24:45.181012   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.181027   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:45.181046   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:45.181113   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:45.216971   65592 cri.go:89] found id: ""
	I1001 20:24:45.217001   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.217010   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:45.217015   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:45.217060   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:45.252240   65592 cri.go:89] found id: ""
	I1001 20:24:45.252275   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.252287   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:45.252294   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:45.252354   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:45.287389   65592 cri.go:89] found id: ""
	I1001 20:24:45.287419   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.287434   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:45.287440   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:45.287501   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:45.319980   65592 cri.go:89] found id: ""
	I1001 20:24:45.320015   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.320027   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:45.320035   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:45.320101   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:45.351894   65592 cri.go:89] found id: ""
	I1001 20:24:45.351920   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.351931   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:45.351936   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:45.351984   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:45.385370   65592 cri.go:89] found id: ""
	I1001 20:24:45.385400   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.385412   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:45.385423   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:45.385485   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:45.449558   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:45.449584   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:45.449596   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:45.524322   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:45.524372   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:45.560729   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:45.560757   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:45.614098   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:45.614139   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:43.119614   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:45.121666   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:47.618362   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:46.932587   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:24:47.263384   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:49.761472   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:48.129944   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:48.143420   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:48.143496   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:48.175627   65592 cri.go:89] found id: ""
	I1001 20:24:48.175668   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.175682   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:48.175689   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:48.175747   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:48.210422   65592 cri.go:89] found id: ""
	I1001 20:24:48.210451   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.210462   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:48.210470   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:48.210535   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:48.243916   65592 cri.go:89] found id: ""
	I1001 20:24:48.243952   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.243963   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:48.243972   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:48.244027   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:48.275802   65592 cri.go:89] found id: ""
	I1001 20:24:48.275830   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.275845   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:48.275857   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:48.275917   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:48.311539   65592 cri.go:89] found id: ""
	I1001 20:24:48.311569   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.311579   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:48.311586   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:48.311648   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:48.342606   65592 cri.go:89] found id: ""
	I1001 20:24:48.342646   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.342658   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:48.342666   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:48.342718   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:48.375554   65592 cri.go:89] found id: ""
	I1001 20:24:48.375581   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.375591   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:48.375597   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:48.375642   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:48.407747   65592 cri.go:89] found id: ""
	I1001 20:24:48.407776   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.407789   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:48.407800   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:48.407814   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:48.457470   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:48.457503   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:48.470483   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:48.470517   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:48.533536   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:48.533565   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:48.533580   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:48.614530   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:48.614571   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:51.157091   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:51.170292   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:51.170364   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:51.203784   65592 cri.go:89] found id: ""
	I1001 20:24:51.203809   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.203822   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:51.203828   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:51.203917   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:51.239789   65592 cri.go:89] found id: ""
	I1001 20:24:51.239826   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.239834   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:51.239840   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:51.239889   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:51.274562   65592 cri.go:89] found id: ""
	I1001 20:24:51.274595   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.274607   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:51.274617   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:51.274701   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:51.306172   65592 cri.go:89] found id: ""
	I1001 20:24:51.306199   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.306207   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:51.306213   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:51.306269   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:51.339631   65592 cri.go:89] found id: ""
	I1001 20:24:51.339660   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.339668   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:51.339674   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:51.339725   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:51.372128   65592 cri.go:89] found id: ""
	I1001 20:24:51.372154   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.372163   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:51.372169   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:51.372223   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:51.403790   65592 cri.go:89] found id: ""
	I1001 20:24:51.403818   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.403828   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:51.403842   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:51.403890   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:51.437771   65592 cri.go:89] found id: ""
	I1001 20:24:51.437799   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.437808   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:51.437816   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:51.437827   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:51.489824   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:51.489864   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:51.503478   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:51.503508   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:51.573741   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:51.573768   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:51.573780   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:51.662355   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:51.662391   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:49.618685   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:51.619186   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:53.012639   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:24:51.761853   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:53.762442   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:56.261818   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:54.199747   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:54.212731   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:54.212797   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:54.244554   65592 cri.go:89] found id: ""
	I1001 20:24:54.244586   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.244596   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:54.244602   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:54.244652   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:54.280636   65592 cri.go:89] found id: ""
	I1001 20:24:54.280667   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.280679   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:54.280686   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:54.280737   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:54.318213   65592 cri.go:89] found id: ""
	I1001 20:24:54.318246   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.318257   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:54.318265   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:54.318321   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:54.353563   65592 cri.go:89] found id: ""
	I1001 20:24:54.353595   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.353606   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:54.353615   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:54.353678   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:54.387770   65592 cri.go:89] found id: ""
	I1001 20:24:54.387795   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.387803   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:54.387809   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:54.387869   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:54.421289   65592 cri.go:89] found id: ""
	I1001 20:24:54.421317   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.421325   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:54.421332   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:54.421382   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:54.456221   65592 cri.go:89] found id: ""
	I1001 20:24:54.456261   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.456274   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:54.456282   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:54.456348   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:54.488174   65592 cri.go:89] found id: ""
	I1001 20:24:54.488208   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.488219   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:54.488228   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:54.488241   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:54.540981   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:54.541020   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:54.554099   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:54.554129   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:54.623978   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:54.624013   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:54.624034   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:54.704703   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:54.704738   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:54.119129   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:56.619282   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:56.088698   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:24:58.262173   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:00.761865   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:57.241791   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:57.254771   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:57.254843   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:57.290226   65592 cri.go:89] found id: ""
	I1001 20:24:57.290263   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.290271   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:57.290277   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:57.290336   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:57.324910   65592 cri.go:89] found id: ""
	I1001 20:24:57.324938   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.324946   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:57.324951   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:57.325068   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:57.360553   65592 cri.go:89] found id: ""
	I1001 20:24:57.360586   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.360601   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:57.360608   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:57.360669   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:57.395182   65592 cri.go:89] found id: ""
	I1001 20:24:57.395216   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.395229   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:57.395236   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:57.395296   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:57.428967   65592 cri.go:89] found id: ""
	I1001 20:24:57.428998   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.429011   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:57.429017   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:57.429072   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:57.462483   65592 cri.go:89] found id: ""
	I1001 20:24:57.462511   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.462519   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:57.462525   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:57.462581   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:57.495505   65592 cri.go:89] found id: ""
	I1001 20:24:57.495538   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.495550   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:57.495556   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:57.495615   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:57.528132   65592 cri.go:89] found id: ""
	I1001 20:24:57.528164   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.528176   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:57.528188   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:57.528203   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:57.596557   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:57.596583   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:57.596598   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:57.676797   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:57.676830   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:57.714624   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:57.714653   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:57.763801   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:57.763839   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:00.277808   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:00.291432   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:00.291489   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:00.327524   65592 cri.go:89] found id: ""
	I1001 20:25:00.327554   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.327562   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:00.327568   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:00.327618   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:00.364125   65592 cri.go:89] found id: ""
	I1001 20:25:00.364153   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.364162   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:00.364167   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:00.364229   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:00.404507   65592 cri.go:89] found id: ""
	I1001 20:25:00.404543   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.404555   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:00.404564   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:00.404770   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:00.438761   65592 cri.go:89] found id: ""
	I1001 20:25:00.438792   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.438800   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:00.438807   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:00.438862   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:00.473263   65592 cri.go:89] found id: ""
	I1001 20:25:00.473301   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.473313   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:00.473321   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:00.473391   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:00.510276   65592 cri.go:89] found id: ""
	I1001 20:25:00.510307   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.510317   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:00.510324   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:00.510383   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:00.545118   65592 cri.go:89] found id: ""
	I1001 20:25:00.545149   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.545165   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:00.545173   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:00.545229   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:00.577773   65592 cri.go:89] found id: ""
	I1001 20:25:00.577799   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.577810   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:00.577821   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:00.577835   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:00.628978   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:00.629012   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:00.642192   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:00.642225   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:00.711399   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:00.711432   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:00.711446   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:00.792477   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:00.792514   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:59.118041   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:01.119565   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:02.164636   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:05.236638   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:02.762323   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:04.764910   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:03.332492   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:03.347542   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:03.347622   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:03.388263   65592 cri.go:89] found id: ""
	I1001 20:25:03.388292   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.388300   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:03.388306   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:03.388353   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:03.421489   65592 cri.go:89] found id: ""
	I1001 20:25:03.421525   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.421534   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:03.421539   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:03.421634   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:03.457139   65592 cri.go:89] found id: ""
	I1001 20:25:03.457172   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.457182   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:03.457189   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:03.457251   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:03.497203   65592 cri.go:89] found id: ""
	I1001 20:25:03.497232   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.497241   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:03.497247   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:03.497313   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:03.535137   65592 cri.go:89] found id: ""
	I1001 20:25:03.535163   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.535171   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:03.535176   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:03.535221   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:03.569131   65592 cri.go:89] found id: ""
	I1001 20:25:03.569158   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.569166   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:03.569171   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:03.569217   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:03.605289   65592 cri.go:89] found id: ""
	I1001 20:25:03.605321   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.605329   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:03.605336   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:03.605389   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:03.651086   65592 cri.go:89] found id: ""
	I1001 20:25:03.651115   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.651123   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:03.651134   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:03.651145   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:03.731256   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:03.731281   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:03.731299   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:03.809393   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:03.809442   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:03.849171   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:03.849198   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:03.898009   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:03.898045   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:06.411962   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:06.425432   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:06.425513   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:06.463339   65592 cri.go:89] found id: ""
	I1001 20:25:06.463371   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.463383   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:06.463391   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:06.463455   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:06.502527   65592 cri.go:89] found id: ""
	I1001 20:25:06.502561   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.502569   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:06.502611   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:06.502687   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:06.547428   65592 cri.go:89] found id: ""
	I1001 20:25:06.547465   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.547474   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:06.547480   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:06.547539   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:06.581672   65592 cri.go:89] found id: ""
	I1001 20:25:06.581699   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.581708   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:06.581713   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:06.581769   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:06.615391   65592 cri.go:89] found id: ""
	I1001 20:25:06.615436   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.615449   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:06.615457   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:06.615525   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:06.651019   65592 cri.go:89] found id: ""
	I1001 20:25:06.651050   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.651060   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:06.651067   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:06.651142   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:06.687887   65592 cri.go:89] found id: ""
	I1001 20:25:06.687912   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.687922   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:06.687929   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:06.687982   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:06.729234   65592 cri.go:89] found id: ""
	I1001 20:25:06.729263   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.729273   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:06.729282   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:06.729296   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:06.747295   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:06.747326   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:06.816480   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:06.816511   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:06.816524   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:06.896918   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:06.896957   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:06.938922   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:06.938958   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:03.619205   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:06.118575   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:06.765214   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:09.261806   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:11.262162   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:09.494252   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:09.508085   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:09.508171   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:09.542999   65592 cri.go:89] found id: ""
	I1001 20:25:09.543029   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.543037   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:09.543043   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:09.543100   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:09.578112   65592 cri.go:89] found id: ""
	I1001 20:25:09.578137   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.578145   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:09.578150   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:09.578199   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:09.613123   65592 cri.go:89] found id: ""
	I1001 20:25:09.613150   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.613158   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:09.613166   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:09.613223   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:09.648172   65592 cri.go:89] found id: ""
	I1001 20:25:09.648214   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.648223   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:09.648230   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:09.648302   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:09.681217   65592 cri.go:89] found id: ""
	I1001 20:25:09.681244   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.681254   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:09.681261   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:09.681320   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:09.718166   65592 cri.go:89] found id: ""
	I1001 20:25:09.718196   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.718204   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:09.718212   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:09.718272   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:09.751910   65592 cri.go:89] found id: ""
	I1001 20:25:09.751942   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.751951   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:09.751956   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:09.752004   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:09.789213   65592 cri.go:89] found id: ""
	I1001 20:25:09.789237   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.789246   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:09.789254   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:09.789265   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:09.826746   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:09.826780   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:09.879079   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:09.879123   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:09.892480   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:09.892507   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:09.967048   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:09.967084   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:09.967103   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:08.118822   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:10.120018   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:12.620582   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:14.356624   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:13.262286   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:15.263349   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:12.545057   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:12.557888   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:12.557969   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:12.594881   65592 cri.go:89] found id: ""
	I1001 20:25:12.594928   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.594942   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:12.594952   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:12.595021   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:12.631393   65592 cri.go:89] found id: ""
	I1001 20:25:12.631425   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.631437   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:12.631445   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:12.631504   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:12.666442   65592 cri.go:89] found id: ""
	I1001 20:25:12.666476   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.666486   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:12.666493   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:12.666548   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:12.703321   65592 cri.go:89] found id: ""
	I1001 20:25:12.703359   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.703371   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:12.703379   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:12.703444   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:12.742188   65592 cri.go:89] found id: ""
	I1001 20:25:12.742216   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.742224   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:12.742230   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:12.742276   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:12.781829   65592 cri.go:89] found id: ""
	I1001 20:25:12.781859   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.781869   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:12.781876   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:12.781940   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:12.815368   65592 cri.go:89] found id: ""
	I1001 20:25:12.815397   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.815405   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:12.815411   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:12.815463   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:12.850913   65592 cri.go:89] found id: ""
	I1001 20:25:12.850941   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.850949   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:12.850958   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:12.850968   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:12.901409   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:12.901443   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:12.914517   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:12.914567   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:12.980086   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:12.980119   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:12.980135   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:13.055950   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:13.055989   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:15.595692   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:15.609648   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:15.609728   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:15.645477   65592 cri.go:89] found id: ""
	I1001 20:25:15.645502   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.645510   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:15.645514   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:15.645558   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:15.679674   65592 cri.go:89] found id: ""
	I1001 20:25:15.679702   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.679711   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:15.679717   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:15.679774   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:15.718057   65592 cri.go:89] found id: ""
	I1001 20:25:15.718082   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.718092   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:15.718097   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:15.718153   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:15.754094   65592 cri.go:89] found id: ""
	I1001 20:25:15.754121   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.754130   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:15.754136   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:15.754189   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:15.790415   65592 cri.go:89] found id: ""
	I1001 20:25:15.790450   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.790464   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:15.790472   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:15.790535   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:15.825603   65592 cri.go:89] found id: ""
	I1001 20:25:15.825630   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.825645   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:15.825653   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:15.825717   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:15.861330   65592 cri.go:89] found id: ""
	I1001 20:25:15.861356   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.861368   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:15.861375   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:15.861451   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:15.897534   65592 cri.go:89] found id: ""
	I1001 20:25:15.897564   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.897575   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:15.897584   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:15.897598   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:15.972842   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:15.972881   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:16.010625   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:16.010653   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:16.062717   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:16.062762   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:16.076538   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:16.076568   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:16.156886   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:15.118878   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:17.119791   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:17.428649   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:17.764089   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:20.261752   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:18.657436   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:18.673018   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:18.673093   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:18.708040   65592 cri.go:89] found id: ""
	I1001 20:25:18.708078   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.708091   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:18.708100   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:18.708167   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:18.740152   65592 cri.go:89] found id: ""
	I1001 20:25:18.740188   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.740200   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:18.740207   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:18.740264   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:18.778238   65592 cri.go:89] found id: ""
	I1001 20:25:18.778270   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.778279   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:18.778287   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:18.778351   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:18.815450   65592 cri.go:89] found id: ""
	I1001 20:25:18.815489   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.815503   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:18.815512   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:18.815576   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:18.850008   65592 cri.go:89] found id: ""
	I1001 20:25:18.850038   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.850047   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:18.850053   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:18.850104   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:18.890919   65592 cri.go:89] found id: ""
	I1001 20:25:18.890943   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.890951   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:18.890957   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:18.891004   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:18.934196   65592 cri.go:89] found id: ""
	I1001 20:25:18.934228   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.934240   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:18.934247   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:18.934307   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:18.977817   65592 cri.go:89] found id: ""
	I1001 20:25:18.977850   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.977862   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:18.977875   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:18.977889   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:19.039867   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:19.039910   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:19.054277   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:19.054310   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:19.125736   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:19.125765   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:19.125782   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:19.208588   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:19.208622   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:21.750881   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:21.766638   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:21.766712   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:21.801906   65592 cri.go:89] found id: ""
	I1001 20:25:21.801930   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.801938   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:21.801944   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:21.801990   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:21.842801   65592 cri.go:89] found id: ""
	I1001 20:25:21.842830   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.842844   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:21.842852   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:21.842917   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:21.876550   65592 cri.go:89] found id: ""
	I1001 20:25:21.876577   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.876588   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:21.876594   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:21.876647   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:21.910972   65592 cri.go:89] found id: ""
	I1001 20:25:21.911007   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.911016   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:21.911022   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:21.911098   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:21.945721   65592 cri.go:89] found id: ""
	I1001 20:25:21.945753   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.945765   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:21.945773   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:21.945833   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:21.982101   65592 cri.go:89] found id: ""
	I1001 20:25:21.982131   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.982143   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:21.982151   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:21.982242   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:22.016526   65592 cri.go:89] found id: ""
	I1001 20:25:22.016558   65592 logs.go:276] 0 containers: []
	W1001 20:25:22.016569   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:22.016577   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:22.016632   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:22.054792   65592 cri.go:89] found id: ""
	I1001 20:25:22.054822   65592 logs.go:276] 0 containers: []
	W1001 20:25:22.054833   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:22.054844   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:22.054863   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:22.105936   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:22.105974   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:22.120834   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:22.120858   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:22.195177   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:22.195211   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:22.195228   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:19.120304   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:21.618511   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:23.512698   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:22.264134   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:24.762355   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:22.281244   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:22.281285   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:24.824197   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:24.840967   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:24.841030   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:24.882399   65592 cri.go:89] found id: ""
	I1001 20:25:24.882429   65592 logs.go:276] 0 containers: []
	W1001 20:25:24.882443   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:24.882449   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:24.882497   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:24.935548   65592 cri.go:89] found id: ""
	I1001 20:25:24.935581   65592 logs.go:276] 0 containers: []
	W1001 20:25:24.935590   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:24.935596   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:24.935644   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:24.976931   65592 cri.go:89] found id: ""
	I1001 20:25:24.976958   65592 logs.go:276] 0 containers: []
	W1001 20:25:24.976969   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:24.976976   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:24.977035   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:25.009926   65592 cri.go:89] found id: ""
	I1001 20:25:25.009959   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.009968   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:25.009975   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:25.010039   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:25.043261   65592 cri.go:89] found id: ""
	I1001 20:25:25.043299   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.043310   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:25.043316   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:25.043377   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:25.075177   65592 cri.go:89] found id: ""
	I1001 20:25:25.075205   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.075214   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:25.075221   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:25.075267   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:25.109792   65592 cri.go:89] found id: ""
	I1001 20:25:25.109832   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.109845   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:25.109871   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:25.109942   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:25.148721   65592 cri.go:89] found id: ""
	I1001 20:25:25.148753   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.148763   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:25.148772   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:25.148790   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:25.161802   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:25.161841   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:25.227699   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:25.227732   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:25.227750   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:25.314028   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:25.314075   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:25.354881   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:25.354919   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:23.618792   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:26.118493   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:26.580628   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:27.262584   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:29.761866   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:27.906936   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:27.920745   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:27.920806   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:27.955399   65592 cri.go:89] found id: ""
	I1001 20:25:27.955426   65592 logs.go:276] 0 containers: []
	W1001 20:25:27.955444   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:27.955450   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:27.955503   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:27.993714   65592 cri.go:89] found id: ""
	I1001 20:25:27.993747   65592 logs.go:276] 0 containers: []
	W1001 20:25:27.993759   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:27.993766   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:27.993827   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:28.028439   65592 cri.go:89] found id: ""
	I1001 20:25:28.028475   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.028487   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:28.028494   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:28.028563   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:28.072935   65592 cri.go:89] found id: ""
	I1001 20:25:28.072966   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.072977   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:28.072985   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:28.073050   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:28.107241   65592 cri.go:89] found id: ""
	I1001 20:25:28.107275   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.107285   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:28.107293   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:28.107357   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:28.141382   65592 cri.go:89] found id: ""
	I1001 20:25:28.141412   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.141423   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:28.141431   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:28.141494   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:28.175749   65592 cri.go:89] found id: ""
	I1001 20:25:28.175782   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.175794   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:28.175801   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:28.175864   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:28.214968   65592 cri.go:89] found id: ""
	I1001 20:25:28.214997   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.215006   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:28.215015   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:28.215027   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:28.259588   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:28.259619   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:28.314439   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:28.314480   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:28.327938   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:28.327967   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:28.399479   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:28.399508   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:28.399523   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:30.978863   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:30.991415   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:30.991493   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:31.026443   65592 cri.go:89] found id: ""
	I1001 20:25:31.026480   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.026494   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:31.026513   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:31.026576   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:31.060635   65592 cri.go:89] found id: ""
	I1001 20:25:31.060663   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.060678   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:31.060684   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:31.060743   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:31.095494   65592 cri.go:89] found id: ""
	I1001 20:25:31.095525   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.095533   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:31.095540   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:31.095587   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:31.130693   65592 cri.go:89] found id: ""
	I1001 20:25:31.130718   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.130728   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:31.130741   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:31.130802   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:31.167928   65592 cri.go:89] found id: ""
	I1001 20:25:31.167960   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.167973   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:31.167980   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:31.168033   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:31.202813   65592 cri.go:89] found id: ""
	I1001 20:25:31.202843   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.202855   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:31.202864   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:31.202925   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:31.240424   65592 cri.go:89] found id: ""
	I1001 20:25:31.240459   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.240468   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:31.240474   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:31.240521   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:31.275470   65592 cri.go:89] found id: ""
	I1001 20:25:31.275502   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.275510   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:31.275518   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:31.275529   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:31.329604   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:31.329642   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:31.342695   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:31.342724   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:31.410169   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:31.410275   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:31.410303   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:31.489630   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:31.489677   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:28.118608   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:30.118718   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:32.119227   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:32.660640   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:35.732653   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:31.762062   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:33.764597   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:36.263251   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:34.027406   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:34.039902   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:34.039975   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:34.074992   65592 cri.go:89] found id: ""
	I1001 20:25:34.075025   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.075038   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:34.075045   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:34.075106   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:34.110264   65592 cri.go:89] found id: ""
	I1001 20:25:34.110293   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.110304   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:34.110311   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:34.110371   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:34.147097   65592 cri.go:89] found id: ""
	I1001 20:25:34.147132   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.147143   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:34.147151   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:34.147208   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:34.179453   65592 cri.go:89] found id: ""
	I1001 20:25:34.179481   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.179491   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:34.179500   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:34.179554   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:34.212407   65592 cri.go:89] found id: ""
	I1001 20:25:34.212433   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.212442   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:34.212449   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:34.212495   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:34.244400   65592 cri.go:89] found id: ""
	I1001 20:25:34.244429   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.244440   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:34.244447   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:34.244510   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:34.278423   65592 cri.go:89] found id: ""
	I1001 20:25:34.278448   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.278458   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:34.278464   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:34.278520   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:34.311019   65592 cri.go:89] found id: ""
	I1001 20:25:34.311049   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.311059   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:34.311072   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:34.311083   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:34.347521   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:34.347549   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:34.400717   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:34.400754   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:34.414550   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:34.414576   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:34.486478   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:34.486503   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:34.486519   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:37.071687   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:37.084941   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:37.085025   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:37.119834   65592 cri.go:89] found id: ""
	I1001 20:25:37.119862   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.119870   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:37.119875   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:37.119984   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:37.154795   65592 cri.go:89] found id: ""
	I1001 20:25:37.154832   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.154851   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:37.154867   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:37.154927   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:37.191552   65592 cri.go:89] found id: ""
	I1001 20:25:37.191581   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.191592   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:37.191599   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:37.191670   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:34.119370   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:36.119698   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:38.761540   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:40.762894   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:37.228883   65592 cri.go:89] found id: ""
	I1001 20:25:37.228918   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.228928   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:37.228936   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:37.229000   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:37.263533   65592 cri.go:89] found id: ""
	I1001 20:25:37.263558   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.263568   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:37.263577   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:37.263638   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:37.297367   65592 cri.go:89] found id: ""
	I1001 20:25:37.297401   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.297414   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:37.297422   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:37.297486   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:37.331091   65592 cri.go:89] found id: ""
	I1001 20:25:37.331121   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.331129   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:37.331135   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:37.331202   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:37.364861   65592 cri.go:89] found id: ""
	I1001 20:25:37.364889   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.364897   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:37.364905   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:37.364916   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:37.417507   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:37.417545   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:37.431613   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:37.431646   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:37.497821   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:37.497846   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:37.497861   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:37.578951   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:37.578996   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:40.121350   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:40.134553   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:40.134634   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:40.169277   65592 cri.go:89] found id: ""
	I1001 20:25:40.169313   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.169325   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:40.169333   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:40.169399   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:40.204111   65592 cri.go:89] found id: ""
	I1001 20:25:40.204144   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.204153   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:40.204159   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:40.204206   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:40.237841   65592 cri.go:89] found id: ""
	I1001 20:25:40.237872   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.237880   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:40.237886   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:40.237942   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:40.273081   65592 cri.go:89] found id: ""
	I1001 20:25:40.273108   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.273117   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:40.273123   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:40.273186   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:40.307351   65592 cri.go:89] found id: ""
	I1001 20:25:40.307384   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.307394   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:40.307399   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:40.307462   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:40.340543   65592 cri.go:89] found id: ""
	I1001 20:25:40.340569   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.340578   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:40.340584   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:40.340655   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:40.376070   65592 cri.go:89] found id: ""
	I1001 20:25:40.376112   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.376123   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:40.376130   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:40.376194   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:40.410236   65592 cri.go:89] found id: ""
	I1001 20:25:40.410267   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.410279   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:40.410289   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:40.410300   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:40.463799   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:40.463835   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:40.478403   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:40.478436   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:40.547250   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:40.547279   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:40.547291   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:40.630061   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:40.630098   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:38.617891   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:40.618430   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:41.612771   65263 pod_ready.go:82] duration metric: took 4m0.000338317s for pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace to be "Ready" ...
	E1001 20:25:41.612803   65263 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace to be "Ready" (will not retry!)
	I1001 20:25:41.612832   65263 pod_ready.go:39] duration metric: took 4m13.169141642s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:25:41.612859   65263 kubeadm.go:597] duration metric: took 4m21.203039001s to restartPrimaryControlPlane
	W1001 20:25:41.612919   65263 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1001 20:25:41.612944   65263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1001 20:25:41.812689   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:44.884661   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:43.264334   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:45.762034   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:43.170764   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:43.183046   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:43.183124   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:43.222995   65592 cri.go:89] found id: ""
	I1001 20:25:43.223029   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.223038   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:43.223044   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:43.223105   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:43.256861   65592 cri.go:89] found id: ""
	I1001 20:25:43.256891   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.256902   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:43.256910   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:43.257002   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:43.292643   65592 cri.go:89] found id: ""
	I1001 20:25:43.292687   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.292698   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:43.292704   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:43.292754   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:43.326539   65592 cri.go:89] found id: ""
	I1001 20:25:43.326568   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.326576   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:43.326582   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:43.326628   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:43.359787   65592 cri.go:89] found id: ""
	I1001 20:25:43.359813   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.359822   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:43.359828   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:43.359890   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:43.392045   65592 cri.go:89] found id: ""
	I1001 20:25:43.392076   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.392086   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:43.392092   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:43.392145   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:43.429498   65592 cri.go:89] found id: ""
	I1001 20:25:43.429529   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.429538   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:43.429544   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:43.429591   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:43.462728   65592 cri.go:89] found id: ""
	I1001 20:25:43.462760   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.462771   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:43.462781   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:43.462798   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:43.512683   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:43.512717   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:43.527253   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:43.527285   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:43.598963   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:43.598989   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:43.599003   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:43.679743   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:43.679790   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:46.217101   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:46.230349   65592 kubeadm.go:597] duration metric: took 4m1.895228035s to restartPrimaryControlPlane
	W1001 20:25:46.230421   65592 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1001 20:25:46.230450   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1001 20:25:47.762241   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:49.763115   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:47.271291   65592 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.040818559s)
	I1001 20:25:47.271362   65592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:25:47.285083   65592 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 20:25:47.295774   65592 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:25:47.305487   65592 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:25:47.305511   65592 kubeadm.go:157] found existing configuration files:
	
	I1001 20:25:47.305568   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 20:25:47.314488   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:25:47.314573   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:25:47.323852   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 20:25:47.332496   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:25:47.332553   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:25:47.341236   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 20:25:47.349932   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:25:47.350002   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:25:47.359345   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 20:25:47.369180   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:25:47.369233   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 20:25:47.378232   65592 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 20:25:47.595501   65592 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 20:25:50.964640   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:54.036635   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:52.261890   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:54.761886   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:00.116640   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:57.261837   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:59.262445   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:01.262529   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:03.188675   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:03.762361   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:06.261749   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:07.708438   65263 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.095470945s)
	I1001 20:26:07.708514   65263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:26:07.722982   65263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 20:26:07.732118   65263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:26:07.741172   65263 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:26:07.741198   65263 kubeadm.go:157] found existing configuration files:
	
	I1001 20:26:07.741244   65263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 20:26:07.749683   65263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:26:07.749744   65263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:26:07.758875   65263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 20:26:07.767668   65263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:26:07.767739   65263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:26:07.776648   65263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 20:26:07.785930   65263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:26:07.785982   65263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:26:07.794739   65263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 20:26:07.803180   65263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:26:07.803241   65263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 20:26:07.812178   65263 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 20:26:07.851817   65263 kubeadm.go:310] W1001 20:26:07.836874    2546 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 20:26:07.852402   65263 kubeadm.go:310] W1001 20:26:07.837670    2546 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 20:26:09.272541   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:08.761247   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:10.761797   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:07.957551   65263 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 20:26:12.344653   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:16.385918   65263 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 20:26:16.385979   65263 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:26:16.386062   65263 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:26:16.386172   65263 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:26:16.386297   65263 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 20:26:16.386400   65263 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 20:26:16.387827   65263 out.go:235]   - Generating certificates and keys ...
	I1001 20:26:16.387909   65263 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:26:16.387989   65263 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:26:16.388104   65263 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 20:26:16.388191   65263 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 20:26:16.388284   65263 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 20:26:16.388370   65263 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 20:26:16.388464   65263 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 20:26:16.388545   65263 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 20:26:16.388646   65263 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 20:26:16.388775   65263 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 20:26:16.388824   65263 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 20:26:16.388908   65263 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:26:16.388956   65263 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:26:16.389006   65263 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 20:26:16.389048   65263 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:26:16.389117   65263 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:26:16.389201   65263 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:26:16.389333   65263 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:26:16.389444   65263 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:26:16.390823   65263 out.go:235]   - Booting up control plane ...
	I1001 20:26:16.390917   65263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:26:16.390992   65263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:26:16.391061   65263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:26:16.391161   65263 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:26:16.391285   65263 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:26:16.391335   65263 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:26:16.391468   65263 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 20:26:16.391572   65263 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 20:26:16.391628   65263 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.349149ms
	I1001 20:26:16.391686   65263 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1001 20:26:16.391736   65263 kubeadm.go:310] [api-check] The API server is healthy after 5.002046172s
	I1001 20:26:16.391818   65263 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 20:26:16.391923   65263 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 20:26:16.391999   65263 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 20:26:16.392169   65263 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-106982 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 20:26:16.392225   65263 kubeadm.go:310] [bootstrap-token] Using token: xlxn2k.owwnzt3amr4nx0st
	I1001 20:26:16.393437   65263 out.go:235]   - Configuring RBAC rules ...
	I1001 20:26:16.393539   65263 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 20:26:16.393609   65263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 20:26:16.393722   65263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 20:26:16.393834   65263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 20:26:16.393940   65263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 20:26:16.394017   65263 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 20:26:16.394117   65263 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 20:26:16.394154   65263 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 20:26:16.394195   65263 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 20:26:16.394200   65263 kubeadm.go:310] 
	I1001 20:26:16.394259   65263 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 20:26:16.394269   65263 kubeadm.go:310] 
	I1001 20:26:16.394335   65263 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 20:26:16.394341   65263 kubeadm.go:310] 
	I1001 20:26:16.394363   65263 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 20:26:16.394440   65263 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 20:26:16.394496   65263 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 20:26:16.394502   65263 kubeadm.go:310] 
	I1001 20:26:16.394553   65263 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 20:26:16.394559   65263 kubeadm.go:310] 
	I1001 20:26:16.394601   65263 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 20:26:16.394611   65263 kubeadm.go:310] 
	I1001 20:26:16.394656   65263 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 20:26:16.394720   65263 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 20:26:16.394804   65263 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 20:26:16.394814   65263 kubeadm.go:310] 
	I1001 20:26:16.394901   65263 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 20:26:16.394996   65263 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 20:26:16.395010   65263 kubeadm.go:310] 
	I1001 20:26:16.395128   65263 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xlxn2k.owwnzt3amr4nx0st \
	I1001 20:26:16.395262   65263 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 \
	I1001 20:26:16.395299   65263 kubeadm.go:310] 	--control-plane 
	I1001 20:26:16.395308   65263 kubeadm.go:310] 
	I1001 20:26:16.395426   65263 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 20:26:16.395436   65263 kubeadm.go:310] 
	I1001 20:26:16.395548   65263 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xlxn2k.owwnzt3amr4nx0st \
	I1001 20:26:16.395648   65263 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 
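Note: the bootstrap token printed above is short-lived, so the join commands in this transcript stop working once it expires. As a hedged aside (standard kubeadm usage, not something minikube runs in this log), an equivalent join command can be regenerated later on the control-plane node:

    # print a fresh worker join command (new token + current CA cert hash)
    sudo kubeadm token create --print-join-command
    # for joining another control-plane node, also re-upload the control-plane certs
    sudo kubeadm init phase upload-certs --upload-certs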
	I1001 20:26:16.395658   65263 cni.go:84] Creating CNI manager for ""
	I1001 20:26:16.395665   65263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:26:16.396852   65263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 20:26:12.763435   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:15.262381   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:16.398081   65263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 20:26:16.407920   65263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
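The 496-byte conflist pushed above is the bridge CNI configuration minikube selects for the kvm2 + crio combination; its exact contents are not shown in this log. Purely as an illustrative sketch (plugin list, subnet, and flags assumed, not copied from minikube), a minimal bridge + portmap conflist at that path looks roughly like:

    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
          "hairpinMode": true, "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF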
	I1001 20:26:16.428213   65263 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 20:26:16.428312   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:16.428344   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-106982 minikube.k8s.io/updated_at=2024_10_01T20_26_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=embed-certs-106982 minikube.k8s.io/primary=true
	I1001 20:26:16.667876   65263 ops.go:34] apiserver oom_adj: -16
	I1001 20:26:16.667891   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:17.168194   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:17.668772   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:18.168815   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:18.668087   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:19.168767   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:19.668624   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:20.167974   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:20.668002   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:20.758486   65263 kubeadm.go:1113] duration metric: took 4.330238814s to wait for elevateKubeSystemPrivileges
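The repeated "kubectl get sa default" invocations above are a readiness poll: minikube retries roughly every 500ms until the default ServiceAccount exists, which is the wait the elevateKubeSystemPrivileges duration metric measures. A minimal shell equivalent of that loop, assuming the same binary and kubeconfig paths as in the log, would be:

    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # matches the ~500ms spacing of the retries above
    done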
	I1001 20:26:20.758520   65263 kubeadm.go:394] duration metric: took 5m0.403602376s to StartCluster
	I1001 20:26:20.758539   65263 settings.go:142] acquiring lock: {Name:mkeb3fe64ed992373f048aef24eb0d675bcab60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:26:20.758613   65263 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:26:20.760430   65263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/kubeconfig: {Name:mk7ef19d546928001abf2478a3abe1d17765a591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:26:20.760678   65263 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 20:26:20.760746   65263 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 20:26:20.760852   65263 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-106982"
	I1001 20:26:20.760881   65263 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-106982"
	I1001 20:26:20.760877   65263 addons.go:69] Setting default-storageclass=true in profile "embed-certs-106982"
	W1001 20:26:20.760893   65263 addons.go:243] addon storage-provisioner should already be in state true
	I1001 20:26:20.760891   65263 addons.go:69] Setting metrics-server=true in profile "embed-certs-106982"
	I1001 20:26:20.760926   65263 host.go:66] Checking if "embed-certs-106982" exists ...
	I1001 20:26:20.760926   65263 addons.go:234] Setting addon metrics-server=true in "embed-certs-106982"
	W1001 20:26:20.761009   65263 addons.go:243] addon metrics-server should already be in state true
	I1001 20:26:20.761041   65263 host.go:66] Checking if "embed-certs-106982" exists ...
	I1001 20:26:20.760906   65263 config.go:182] Loaded profile config "embed-certs-106982": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:26:20.760902   65263 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-106982"
	I1001 20:26:20.761374   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.761426   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.761429   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.761468   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.761545   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.761591   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.762861   65263 out.go:177] * Verifying Kubernetes components...
	I1001 20:26:20.764393   65263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:26:20.778448   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38619
	I1001 20:26:20.779031   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.779198   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39861
	I1001 20:26:20.779632   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.779657   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.779822   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.780085   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.780331   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.780352   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.780789   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.780829   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.781030   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.781240   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetState
	I1001 20:26:20.781260   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37397
	I1001 20:26:20.781672   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.782168   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.782189   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.782587   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.783037   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.783073   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.784573   65263 addons.go:234] Setting addon default-storageclass=true in "embed-certs-106982"
	W1001 20:26:20.784589   65263 addons.go:243] addon default-storageclass should already be in state true
	I1001 20:26:20.784609   65263 host.go:66] Checking if "embed-certs-106982" exists ...
	I1001 20:26:20.784877   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.784912   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.797787   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37113
	I1001 20:26:20.797864   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33201
	I1001 20:26:20.798261   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.798311   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.798836   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.798855   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.798931   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.798951   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.799226   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35413
	I1001 20:26:20.799230   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.799367   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.799409   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetState
	I1001 20:26:20.799515   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetState
	I1001 20:26:20.799695   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.800114   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.800130   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.800602   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.801316   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.801331   65263 main.go:141] libmachine: (embed-certs-106982) Calling .DriverName
	I1001 20:26:20.801351   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.801391   65263 main.go:141] libmachine: (embed-certs-106982) Calling .DriverName
	I1001 20:26:20.803237   65263 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1001 20:26:20.803241   65263 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 20:26:18.420597   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:17.762869   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:20.262479   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:20.804378   65263 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1001 20:26:20.804394   65263 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1001 20:26:20.804411   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHHostname
	I1001 20:26:20.804571   65263 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:26:20.804586   65263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 20:26:20.804603   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHHostname
	I1001 20:26:20.808458   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.808866   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.808906   65263 main.go:141] libmachine: (embed-certs-106982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:ab:67", ip: ""} in network mk-embed-certs-106982: {Iface:virbr1 ExpiryTime:2024-10-01 21:21:05 +0000 UTC Type:0 Mac:52:54:00:7f:ab:67 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:embed-certs-106982 Clientid:01:52:54:00:7f:ab:67}
	I1001 20:26:20.808923   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined IP address 192.168.39.203 and MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.809183   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHPort
	I1001 20:26:20.809326   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHKeyPath
	I1001 20:26:20.809462   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHUsername
	I1001 20:26:20.809582   65263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/embed-certs-106982/id_rsa Username:docker}
	I1001 20:26:20.809917   65263 main.go:141] libmachine: (embed-certs-106982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:ab:67", ip: ""} in network mk-embed-certs-106982: {Iface:virbr1 ExpiryTime:2024-10-01 21:21:05 +0000 UTC Type:0 Mac:52:54:00:7f:ab:67 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:embed-certs-106982 Clientid:01:52:54:00:7f:ab:67}
	I1001 20:26:20.809941   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined IP address 192.168.39.203 and MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.809975   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHPort
	I1001 20:26:20.810172   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHKeyPath
	I1001 20:26:20.810320   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHUsername
	I1001 20:26:20.810498   65263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/embed-certs-106982/id_rsa Username:docker}
	I1001 20:26:20.818676   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33699
	I1001 20:26:20.819066   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.819574   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.819596   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.819900   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.820110   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetState
	I1001 20:26:20.821633   65263 main.go:141] libmachine: (embed-certs-106982) Calling .DriverName
	I1001 20:26:20.821820   65263 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 20:26:20.821834   65263 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 20:26:20.821852   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHHostname
	I1001 20:26:20.824684   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.825165   65263 main.go:141] libmachine: (embed-certs-106982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:ab:67", ip: ""} in network mk-embed-certs-106982: {Iface:virbr1 ExpiryTime:2024-10-01 21:21:05 +0000 UTC Type:0 Mac:52:54:00:7f:ab:67 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:embed-certs-106982 Clientid:01:52:54:00:7f:ab:67}
	I1001 20:26:20.825205   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined IP address 192.168.39.203 and MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.825425   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHPort
	I1001 20:26:20.825577   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHKeyPath
	I1001 20:26:20.825697   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHUsername
	I1001 20:26:20.825835   65263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/embed-certs-106982/id_rsa Username:docker}
	I1001 20:26:20.984756   65263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:26:21.014051   65263 node_ready.go:35] waiting up to 6m0s for node "embed-certs-106982" to be "Ready" ...
	I1001 20:26:21.023227   65263 node_ready.go:49] node "embed-certs-106982" has status "Ready":"True"
	I1001 20:26:21.023274   65263 node_ready.go:38] duration metric: took 9.170523ms for node "embed-certs-106982" to be "Ready" ...
	I1001 20:26:21.023286   65263 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:26:21.029371   65263 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rq5ms" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:21.113480   65263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1001 20:26:21.113509   65263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1001 20:26:21.138000   65263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1001 20:26:21.138028   65263 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1001 20:26:21.162057   65263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:26:21.240772   65263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 20:26:21.251310   65263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 20:26:21.251337   65263 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1001 20:26:21.316994   65263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 20:26:22.282775   65263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.041963655s)
	I1001 20:26:22.282809   65263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.120713974s)
	I1001 20:26:22.282835   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.282849   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.282849   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.282864   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.283226   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.283243   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.283256   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.283265   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.283244   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.283298   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.283311   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.283275   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.283278   65263 main.go:141] libmachine: (embed-certs-106982) DBG | Closing plugin on server side
	I1001 20:26:22.283808   65263 main.go:141] libmachine: (embed-certs-106982) DBG | Closing plugin on server side
	I1001 20:26:22.283808   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.283839   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.283892   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.283907   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.342382   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.342407   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.342708   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.342732   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.434882   65263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.117844425s)
	I1001 20:26:22.434937   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.434950   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.435276   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.435291   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.435301   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.435309   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.435554   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.435582   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.435593   65263 addons.go:475] Verifying addon metrics-server=true in "embed-certs-106982"
	I1001 20:26:22.437796   65263 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1001 20:26:22.438856   65263 addons.go:510] duration metric: took 1.678119807s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
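With storage-provisioner, default-storageclass and metrics-server applied, a manual spot-check of the resulting objects would look like the following (a hedged sketch: the pod names appear in the pod list later in this log, while the "metrics-server" deployment name and the "standard" StorageClass name are minikube's usual defaults and are assumed here):

    kubectl --context embed-certs-106982 -n kube-system get deploy metrics-server
    kubectl --context embed-certs-106982 -n kube-system get pod storage-provisioner
    kubectl --context embed-certs-106982 get storageclass standard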
	I1001 20:26:21.492616   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:22.263077   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:24.761931   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:23.036676   65263 pod_ready.go:103] pod "coredns-7c65d6cfc9-rq5ms" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:25.537836   65263 pod_ready.go:103] pod "coredns-7c65d6cfc9-rq5ms" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:26.536827   65263 pod_ready.go:93] pod "coredns-7c65d6cfc9-rq5ms" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:26.536853   65263 pod_ready.go:82] duration metric: took 5.507455172s for pod "coredns-7c65d6cfc9-rq5ms" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:26.536865   65263 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wfdwp" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:26.541397   65263 pod_ready.go:93] pod "coredns-7c65d6cfc9-wfdwp" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:26.541427   65263 pod_ready.go:82] duration metric: took 4.554335ms for pod "coredns-7c65d6cfc9-wfdwp" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:26.541436   65263 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.048586   65263 pod_ready.go:93] pod "etcd-embed-certs-106982" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:27.048612   65263 pod_ready.go:82] duration metric: took 507.170207ms for pod "etcd-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.048622   65263 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.053967   65263 pod_ready.go:93] pod "kube-apiserver-embed-certs-106982" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:27.053994   65263 pod_ready.go:82] duration metric: took 5.365871ms for pod "kube-apiserver-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.054007   65263 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.059419   65263 pod_ready.go:93] pod "kube-controller-manager-embed-certs-106982" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:27.059441   65263 pod_ready.go:82] duration metric: took 5.427863ms for pod "kube-controller-manager-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.059452   65263 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fjnvc" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.333488   65263 pod_ready.go:93] pod "kube-proxy-fjnvc" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:27.333512   65263 pod_ready.go:82] duration metric: took 274.054021ms for pod "kube-proxy-fjnvc" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.333521   65263 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.733368   65263 pod_ready.go:93] pod "kube-scheduler-embed-certs-106982" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:27.733392   65263 pod_ready.go:82] duration metric: took 399.861423ms for pod "kube-scheduler-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.733400   65263 pod_ready.go:39] duration metric: took 6.710101442s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
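Each of those pod_ready waits is just a Ready condition check against a label or component selector; reproduced by hand it would look roughly like this (a hedged example for the CoreDNS pods only, using the k8s-app=kube-dns label from the wait set above):

    kubectl --context embed-certs-106982 -n kube-system wait \
        --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m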
	I1001 20:26:27.733422   65263 api_server.go:52] waiting for apiserver process to appear ...
	I1001 20:26:27.733476   65263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:26:27.750336   65263 api_server.go:72] duration metric: took 6.989620923s to wait for apiserver process to appear ...
	I1001 20:26:27.750367   65263 api_server.go:88] waiting for apiserver healthz status ...
	I1001 20:26:27.750389   65263 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I1001 20:26:27.755350   65263 api_server.go:279] https://192.168.39.203:8443/healthz returned 200:
	ok
	I1001 20:26:27.756547   65263 api_server.go:141] control plane version: v1.31.1
	I1001 20:26:27.756572   65263 api_server.go:131] duration metric: took 6.196295ms to wait for apiserver health ...
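The healthz probe is a plain HTTPS GET against the apiserver endpoint; done from the host it would look roughly like this (the certificate paths follow minikube's usual ~/.minikube layout and are an assumption, not taken from this log):

    curl --cacert ~/.minikube/ca.crt \
         --cert   ~/.minikube/profiles/embed-certs-106982/client.crt \
         --key    ~/.minikube/profiles/embed-certs-106982/client.key \
         https://192.168.39.203:8443/healthz
    # expected response body: ok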
	I1001 20:26:27.756583   65263 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 20:26:27.937329   65263 system_pods.go:59] 9 kube-system pods found
	I1001 20:26:27.937364   65263 system_pods.go:61] "coredns-7c65d6cfc9-rq5ms" [652fcc3d-ae12-4e11-b212-8891c1c05701] Running
	I1001 20:26:27.937373   65263 system_pods.go:61] "coredns-7c65d6cfc9-wfdwp" [1174cd48-6855-4813-9ecd-3b3a82386720] Running
	I1001 20:26:27.937380   65263 system_pods.go:61] "etcd-embed-certs-106982" [84d678ad-7322-48d0-8bab-6c683d3cf8a5] Running
	I1001 20:26:27.937386   65263 system_pods.go:61] "kube-apiserver-embed-certs-106982" [93d7fba8-306f-4b04-b65b-e3d4442f9ba6] Running
	I1001 20:26:27.937392   65263 system_pods.go:61] "kube-controller-manager-embed-certs-106982" [5e405af0-a942-4040-a955-8a007c2fc6e9] Running
	I1001 20:26:27.937396   65263 system_pods.go:61] "kube-proxy-fjnvc" [728b1b90-5961-45e9-9818-8fc6f6db1634] Running
	I1001 20:26:27.937402   65263 system_pods.go:61] "kube-scheduler-embed-certs-106982" [c0289891-9235-44de-a3cb-669648f5c18e] Running
	I1001 20:26:27.937416   65263 system_pods.go:61] "metrics-server-6867b74b74-z27sl" [dd1b4cdc-5ffb-4214-96c9-569ef4f7ba09] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:26:27.937427   65263 system_pods.go:61] "storage-provisioner" [3aaab1f2-8361-46c6-88be-ed9004628715] Running
	I1001 20:26:27.937441   65263 system_pods.go:74] duration metric: took 180.849735ms to wait for pod list to return data ...
	I1001 20:26:27.937453   65263 default_sa.go:34] waiting for default service account to be created ...
	I1001 20:26:28.133918   65263 default_sa.go:45] found service account: "default"
	I1001 20:26:28.133945   65263 default_sa.go:55] duration metric: took 196.482206ms for default service account to be created ...
	I1001 20:26:28.133955   65263 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 20:26:28.335883   65263 system_pods.go:86] 9 kube-system pods found
	I1001 20:26:28.335916   65263 system_pods.go:89] "coredns-7c65d6cfc9-rq5ms" [652fcc3d-ae12-4e11-b212-8891c1c05701] Running
	I1001 20:26:28.335923   65263 system_pods.go:89] "coredns-7c65d6cfc9-wfdwp" [1174cd48-6855-4813-9ecd-3b3a82386720] Running
	I1001 20:26:28.335927   65263 system_pods.go:89] "etcd-embed-certs-106982" [84d678ad-7322-48d0-8bab-6c683d3cf8a5] Running
	I1001 20:26:28.335931   65263 system_pods.go:89] "kube-apiserver-embed-certs-106982" [93d7fba8-306f-4b04-b65b-e3d4442f9ba6] Running
	I1001 20:26:28.335935   65263 system_pods.go:89] "kube-controller-manager-embed-certs-106982" [5e405af0-a942-4040-a955-8a007c2fc6e9] Running
	I1001 20:26:28.335939   65263 system_pods.go:89] "kube-proxy-fjnvc" [728b1b90-5961-45e9-9818-8fc6f6db1634] Running
	I1001 20:26:28.335942   65263 system_pods.go:89] "kube-scheduler-embed-certs-106982" [c0289891-9235-44de-a3cb-669648f5c18e] Running
	I1001 20:26:28.335947   65263 system_pods.go:89] "metrics-server-6867b74b74-z27sl" [dd1b4cdc-5ffb-4214-96c9-569ef4f7ba09] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:26:28.335951   65263 system_pods.go:89] "storage-provisioner" [3aaab1f2-8361-46c6-88be-ed9004628715] Running
	I1001 20:26:28.335959   65263 system_pods.go:126] duration metric: took 202.000148ms to wait for k8s-apps to be running ...
	I1001 20:26:28.335967   65263 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 20:26:28.336013   65263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:26:28.350578   65263 system_svc.go:56] duration metric: took 14.603568ms WaitForService to wait for kubelet
	I1001 20:26:28.350608   65263 kubeadm.go:582] duration metric: took 7.589898283s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:26:28.350630   65263 node_conditions.go:102] verifying NodePressure condition ...
	I1001 20:26:28.533508   65263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 20:26:28.533533   65263 node_conditions.go:123] node cpu capacity is 2
	I1001 20:26:28.533544   65263 node_conditions.go:105] duration metric: took 182.908473ms to run NodePressure ...
	I1001 20:26:28.533554   65263 start.go:241] waiting for startup goroutines ...
	I1001 20:26:28.533561   65263 start.go:246] waiting for cluster config update ...
	I1001 20:26:28.533571   65263 start.go:255] writing updated cluster config ...
	I1001 20:26:28.533862   65263 ssh_runner.go:195] Run: rm -f paused
	I1001 20:26:28.580991   65263 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 20:26:28.583612   65263 out.go:177] * Done! kubectl is now configured to use "embed-certs-106982" cluster and "default" namespace by default
	I1001 20:26:27.572585   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:30.648588   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
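The interleaved 68418 lines show a different profile's VM at 192.168.50.4 being unreachable over SSH ("no route to host") for minutes at a stretch. A quick manual triage from the libvirt host, sketched here under the assumption that the domain and its mk-<profile> network should still exist, might be:

    virsh list --all                    # is the domain actually running?
    virsh net-dhcp-leases mk-<profile>  # does a lease still map to 192.168.50.4?
    nc -vz -w 5 192.168.50.4 22         # can the host reach port 22 at all?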
	I1001 20:26:27.262297   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:29.761795   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:31.762340   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:34.261713   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:35.263742   64676 pod_ready.go:82] duration metric: took 4m0.008218565s for pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace to be "Ready" ...
	E1001 20:26:35.263766   64676 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1001 20:26:35.263774   64676 pod_ready.go:39] duration metric: took 4m6.044360969s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
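The 4m0s WaitExtra budget expired with metrics-server-6867b74b74-2rpwt still not Ready, so the run falls through to the health-check and log-gathering steps below. When reading such a failure, the usual next step is to inspect the pod and its events directly; a hedged sketch (the k8s-app=metrics-server label is the addon's conventional label and is assumed):

    kubectl -n kube-system describe pod -l k8s-app=metrics-server
    kubectl -n kube-system logs deploy/metrics-server --tail=100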
	I1001 20:26:35.263791   64676 api_server.go:52] waiting for apiserver process to appear ...
	I1001 20:26:35.263820   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:26:35.263879   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:26:35.314427   64676 cri.go:89] found id: "a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:35.314450   64676 cri.go:89] found id: ""
	I1001 20:26:35.314457   64676 logs.go:276] 1 containers: [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd]
	I1001 20:26:35.314510   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.319554   64676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:26:35.319627   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:26:35.352986   64676 cri.go:89] found id: "586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:35.353006   64676 cri.go:89] found id: ""
	I1001 20:26:35.353013   64676 logs.go:276] 1 containers: [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f]
	I1001 20:26:35.353061   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.356979   64676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:26:35.357044   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:26:35.397175   64676 cri.go:89] found id: "4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:35.397196   64676 cri.go:89] found id: ""
	I1001 20:26:35.397203   64676 logs.go:276] 1 containers: [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6]
	I1001 20:26:35.397250   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.401025   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:26:35.401108   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:26:35.434312   64676 cri.go:89] found id: "89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:35.434333   64676 cri.go:89] found id: ""
	I1001 20:26:35.434340   64676 logs.go:276] 1 containers: [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d]
	I1001 20:26:35.434400   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.438325   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:26:35.438385   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:26:35.480711   64676 cri.go:89] found id: "fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:35.480738   64676 cri.go:89] found id: ""
	I1001 20:26:35.480750   64676 logs.go:276] 1 containers: [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8]
	I1001 20:26:35.480795   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.484996   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:26:35.485073   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:26:35.524876   64676 cri.go:89] found id: "69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:35.524909   64676 cri.go:89] found id: ""
	I1001 20:26:35.524920   64676 logs.go:276] 1 containers: [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf]
	I1001 20:26:35.524984   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.529297   64676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:26:35.529366   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:26:35.564110   64676 cri.go:89] found id: ""
	I1001 20:26:35.564138   64676 logs.go:276] 0 containers: []
	W1001 20:26:35.564149   64676 logs.go:278] No container was found matching "kindnet"
	I1001 20:26:35.564157   64676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1001 20:26:35.564222   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1001 20:26:35.599279   64676 cri.go:89] found id: "a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
	I1001 20:26:35.599311   64676 cri.go:89] found id: "652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:35.599318   64676 cri.go:89] found id: ""
	I1001 20:26:35.599327   64676 logs.go:276] 2 containers: [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b]
	I1001 20:26:35.599379   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.603377   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.607668   64676 logs.go:123] Gathering logs for kubelet ...
	I1001 20:26:35.607698   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:26:35.678017   64676 logs.go:123] Gathering logs for coredns [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6] ...
	I1001 20:26:35.678053   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:35.717814   64676 logs.go:123] Gathering logs for kube-proxy [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8] ...
	I1001 20:26:35.717842   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:35.752647   64676 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:26:35.752680   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:26:36.259582   64676 logs.go:123] Gathering logs for container status ...
	I1001 20:26:36.259630   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:26:36.299857   64676 logs.go:123] Gathering logs for storage-provisioner [652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b] ...
	I1001 20:26:36.299892   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:36.339923   64676 logs.go:123] Gathering logs for dmesg ...
	I1001 20:26:36.339973   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:26:36.353728   64676 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:26:36.353763   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 20:26:36.728608   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:39.796591   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:36.482029   64676 logs.go:123] Gathering logs for kube-apiserver [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd] ...
	I1001 20:26:36.482071   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:36.525705   64676 logs.go:123] Gathering logs for etcd [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f] ...
	I1001 20:26:36.525741   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:36.566494   64676 logs.go:123] Gathering logs for kube-scheduler [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d] ...
	I1001 20:26:36.566529   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:36.602489   64676 logs.go:123] Gathering logs for kube-controller-manager [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf] ...
	I1001 20:26:36.602523   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:36.666726   64676 logs.go:123] Gathering logs for storage-provisioner [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa] ...
	I1001 20:26:36.666757   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
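Every "Gathering logs for ..." step above reduces to a handful of commands executed over SSH on the node; run by hand they would be roughly (the container id is a placeholder for the ids resolved by the crictl ps calls earlier in the log):

    sudo crictl ps -a --quiet --name=kube-apiserver   # resolve the container id
    sudo crictl logs --tail 400 <container-id>        # dump its recent output
    sudo journalctl -u kubelet -n 400                 # kubelet unit log
    sudo journalctl -u crio -n 400                    # CRI-O unit log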
	I1001 20:26:39.203217   64676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:26:39.220220   64676 api_server.go:72] duration metric: took 4m17.274155342s to wait for apiserver process to appear ...
	I1001 20:26:39.220253   64676 api_server.go:88] waiting for apiserver healthz status ...
	I1001 20:26:39.220301   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:26:39.220372   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:26:39.261710   64676 cri.go:89] found id: "a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:39.261739   64676 cri.go:89] found id: ""
	I1001 20:26:39.261749   64676 logs.go:276] 1 containers: [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd]
	I1001 20:26:39.261804   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.265994   64676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:26:39.266057   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:26:39.298615   64676 cri.go:89] found id: "586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:39.298642   64676 cri.go:89] found id: ""
	I1001 20:26:39.298650   64676 logs.go:276] 1 containers: [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f]
	I1001 20:26:39.298694   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.302584   64676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:26:39.302647   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:26:39.338062   64676 cri.go:89] found id: "4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:39.338091   64676 cri.go:89] found id: ""
	I1001 20:26:39.338102   64676 logs.go:276] 1 containers: [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6]
	I1001 20:26:39.338157   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.342553   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:26:39.342613   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:26:39.379787   64676 cri.go:89] found id: "89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:39.379818   64676 cri.go:89] found id: ""
	I1001 20:26:39.379828   64676 logs.go:276] 1 containers: [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d]
	I1001 20:26:39.379885   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.384397   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:26:39.384454   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:26:39.419175   64676 cri.go:89] found id: "fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:39.419204   64676 cri.go:89] found id: ""
	I1001 20:26:39.419215   64676 logs.go:276] 1 containers: [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8]
	I1001 20:26:39.419275   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.423113   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:26:39.423184   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:26:39.455948   64676 cri.go:89] found id: "69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:39.455974   64676 cri.go:89] found id: ""
	I1001 20:26:39.455984   64676 logs.go:276] 1 containers: [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf]
	I1001 20:26:39.456040   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.459912   64676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:26:39.459978   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:26:39.504152   64676 cri.go:89] found id: ""
	I1001 20:26:39.504179   64676 logs.go:276] 0 containers: []
	W1001 20:26:39.504187   64676 logs.go:278] No container was found matching "kindnet"
	I1001 20:26:39.504192   64676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1001 20:26:39.504241   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1001 20:26:39.538918   64676 cri.go:89] found id: "a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
	I1001 20:26:39.538940   64676 cri.go:89] found id: "652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:39.538947   64676 cri.go:89] found id: ""
	I1001 20:26:39.538957   64676 logs.go:276] 2 containers: [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b]
	I1001 20:26:39.539013   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.542832   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.546365   64676 logs.go:123] Gathering logs for storage-provisioner [652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b] ...
	I1001 20:26:39.546395   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:39.589286   64676 logs.go:123] Gathering logs for kubelet ...
	I1001 20:26:39.589320   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:26:39.657412   64676 logs.go:123] Gathering logs for dmesg ...
	I1001 20:26:39.657447   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:26:39.671553   64676 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:26:39.671581   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 20:26:39.786194   64676 logs.go:123] Gathering logs for kube-apiserver [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd] ...
	I1001 20:26:39.786226   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:39.829798   64676 logs.go:123] Gathering logs for coredns [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6] ...
	I1001 20:26:39.829831   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:39.865854   64676 logs.go:123] Gathering logs for kube-controller-manager [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf] ...
	I1001 20:26:39.865890   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:39.920702   64676 logs.go:123] Gathering logs for storage-provisioner [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa] ...
	I1001 20:26:39.920735   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
	I1001 20:26:39.959343   64676 logs.go:123] Gathering logs for etcd [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f] ...
	I1001 20:26:39.959375   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:40.001320   64676 logs.go:123] Gathering logs for kube-scheduler [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d] ...
	I1001 20:26:40.001354   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:40.037182   64676 logs.go:123] Gathering logs for kube-proxy [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8] ...
	I1001 20:26:40.037214   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:40.070072   64676 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:26:40.070098   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:26:40.492733   64676 logs.go:123] Gathering logs for container status ...
	I1001 20:26:40.492770   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:26:43.042801   64676 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I1001 20:26:43.048223   64676 api_server.go:279] https://192.168.61.93:8443/healthz returned 200:
	ok
	I1001 20:26:43.049199   64676 api_server.go:141] control plane version: v1.31.1
	I1001 20:26:43.049229   64676 api_server.go:131] duration metric: took 3.828968104s to wait for apiserver health ...
	I1001 20:26:43.049239   64676 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 20:26:43.049267   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:26:43.049331   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:26:43.087098   64676 cri.go:89] found id: "a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:43.087132   64676 cri.go:89] found id: ""
	I1001 20:26:43.087144   64676 logs.go:276] 1 containers: [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd]
	I1001 20:26:43.087206   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.091606   64676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:26:43.091665   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:26:43.127154   64676 cri.go:89] found id: "586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:43.127177   64676 cri.go:89] found id: ""
	I1001 20:26:43.127184   64676 logs.go:276] 1 containers: [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f]
	I1001 20:26:43.127227   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.131246   64676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:26:43.131320   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:26:43.165473   64676 cri.go:89] found id: "4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:43.165503   64676 cri.go:89] found id: ""
	I1001 20:26:43.165514   64676 logs.go:276] 1 containers: [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6]
	I1001 20:26:43.165577   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.169908   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:26:43.169982   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:26:43.210196   64676 cri.go:89] found id: "89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:43.210225   64676 cri.go:89] found id: ""
	I1001 20:26:43.210235   64676 logs.go:276] 1 containers: [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d]
	I1001 20:26:43.210302   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.214253   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:26:43.214317   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:26:43.249533   64676 cri.go:89] found id: "fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:43.249555   64676 cri.go:89] found id: ""
	I1001 20:26:43.249563   64676 logs.go:276] 1 containers: [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8]
	I1001 20:26:43.249625   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.253555   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:26:43.253633   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:26:43.294711   64676 cri.go:89] found id: "69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:43.294734   64676 cri.go:89] found id: ""
	I1001 20:26:43.294742   64676 logs.go:276] 1 containers: [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf]
	I1001 20:26:43.294787   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.298960   64676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:26:43.299037   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:26:43.339542   64676 cri.go:89] found id: ""
	I1001 20:26:43.339572   64676 logs.go:276] 0 containers: []
	W1001 20:26:43.339582   64676 logs.go:278] No container was found matching "kindnet"
	I1001 20:26:43.339588   64676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1001 20:26:43.339667   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1001 20:26:43.382206   64676 cri.go:89] found id: "a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
	I1001 20:26:43.382230   64676 cri.go:89] found id: "652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:43.382234   64676 cri.go:89] found id: ""
	I1001 20:26:43.382241   64676 logs.go:276] 2 containers: [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b]
	I1001 20:26:43.382289   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.386473   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.390146   64676 logs.go:123] Gathering logs for kubelet ...
	I1001 20:26:43.390172   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:26:43.457659   64676 logs.go:123] Gathering logs for dmesg ...
	I1001 20:26:43.457699   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:26:43.471078   64676 logs.go:123] Gathering logs for kube-apiserver [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd] ...
	I1001 20:26:43.471109   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:43.518058   64676 logs.go:123] Gathering logs for etcd [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f] ...
	I1001 20:26:43.518093   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:43.559757   64676 logs.go:123] Gathering logs for kube-proxy [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8] ...
	I1001 20:26:43.559788   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:43.595485   64676 logs.go:123] Gathering logs for storage-provisioner [652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b] ...
	I1001 20:26:43.595513   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:43.628167   64676 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:26:43.628195   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 20:26:43.741206   64676 logs.go:123] Gathering logs for coredns [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6] ...
	I1001 20:26:43.741234   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:43.777220   64676 logs.go:123] Gathering logs for kube-scheduler [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d] ...
	I1001 20:26:43.777248   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:43.817507   64676 logs.go:123] Gathering logs for kube-controller-manager [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf] ...
	I1001 20:26:43.817536   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:43.880127   64676 logs.go:123] Gathering logs for storage-provisioner [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa] ...
	I1001 20:26:43.880161   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
	I1001 20:26:43.915172   64676 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:26:43.915199   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:26:44.289237   64676 logs.go:123] Gathering logs for container status ...
	I1001 20:26:44.289277   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
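The log-gathering pass above follows one pattern per component: list matching CRI containers with crictl, then tail the logs of each container found. A hypothetical Go sketch of that pattern, with commands mirroring the ones recorded in the log and error handling simplified (this is not the real logs.go code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all CRI containers (running or not) whose name matches.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"}
	for _, component := range components {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Println("listing", component, "failed:", err)
			continue
		}
		for _, id := range ids {
			// Tail each container's logs, as in the "Gathering logs for ..." lines above.
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("--- %s [%s] ---\n%s\n", component, id, logs)
		}
	}
}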
	I1001 20:26:46.835363   64676 system_pods.go:59] 8 kube-system pods found
	I1001 20:26:46.835393   64676 system_pods.go:61] "coredns-7c65d6cfc9-g8jf8" [7fbddef1-a564-4ee8-ab53-ae838d0fd984] Running
	I1001 20:26:46.835398   64676 system_pods.go:61] "etcd-no-preload-262337" [086d7949-d20d-49d8-871d-a464de60e4cb] Running
	I1001 20:26:46.835402   64676 system_pods.go:61] "kube-apiserver-no-preload-262337" [d8473136-4e07-43e2-bd20-65232e2d5102] Running
	I1001 20:26:46.835405   64676 system_pods.go:61] "kube-controller-manager-no-preload-262337" [63c7d071-20cd-48c5-b410-b78e339b0731] Running
	I1001 20:26:46.835408   64676 system_pods.go:61] "kube-proxy-7rrkn" [e25a055c-0203-4fe7-8801-560b9cdb27bb] Running
	I1001 20:26:46.835412   64676 system_pods.go:61] "kube-scheduler-no-preload-262337" [3b962e64-eea6-4c24-a230-32c40106a4dd] Running
	I1001 20:26:46.835418   64676 system_pods.go:61] "metrics-server-6867b74b74-2rpwt" [235515ab-28fc-437b-983a-243f7a8fb183] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:26:46.835422   64676 system_pods.go:61] "storage-provisioner" [8832193a-39b4-49b9-b943-3241bb27fb8d] Running
	I1001 20:26:46.835431   64676 system_pods.go:74] duration metric: took 3.786183909s to wait for pod list to return data ...
	I1001 20:26:46.835441   64676 default_sa.go:34] waiting for default service account to be created ...
	I1001 20:26:46.838345   64676 default_sa.go:45] found service account: "default"
	I1001 20:26:46.838367   64676 default_sa.go:55] duration metric: took 2.918089ms for default service account to be created ...
	I1001 20:26:46.838375   64676 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 20:26:46.844822   64676 system_pods.go:86] 8 kube-system pods found
	I1001 20:26:46.844850   64676 system_pods.go:89] "coredns-7c65d6cfc9-g8jf8" [7fbddef1-a564-4ee8-ab53-ae838d0fd984] Running
	I1001 20:26:46.844856   64676 system_pods.go:89] "etcd-no-preload-262337" [086d7949-d20d-49d8-871d-a464de60e4cb] Running
	I1001 20:26:46.844860   64676 system_pods.go:89] "kube-apiserver-no-preload-262337" [d8473136-4e07-43e2-bd20-65232e2d5102] Running
	I1001 20:26:46.844863   64676 system_pods.go:89] "kube-controller-manager-no-preload-262337" [63c7d071-20cd-48c5-b410-b78e339b0731] Running
	I1001 20:26:46.844867   64676 system_pods.go:89] "kube-proxy-7rrkn" [e25a055c-0203-4fe7-8801-560b9cdb27bb] Running
	I1001 20:26:46.844870   64676 system_pods.go:89] "kube-scheduler-no-preload-262337" [3b962e64-eea6-4c24-a230-32c40106a4dd] Running
	I1001 20:26:46.844876   64676 system_pods.go:89] "metrics-server-6867b74b74-2rpwt" [235515ab-28fc-437b-983a-243f7a8fb183] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:26:46.844881   64676 system_pods.go:89] "storage-provisioner" [8832193a-39b4-49b9-b943-3241bb27fb8d] Running
	I1001 20:26:46.844889   64676 system_pods.go:126] duration metric: took 6.508902ms to wait for k8s-apps to be running ...
	I1001 20:26:46.844895   64676 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 20:26:46.844934   64676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:26:46.861543   64676 system_svc.go:56] duration metric: took 16.63712ms WaitForService to wait for kubelet
	I1001 20:26:46.861586   64676 kubeadm.go:582] duration metric: took 4m24.915538002s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:26:46.861614   64676 node_conditions.go:102] verifying NodePressure condition ...
	I1001 20:26:46.864599   64676 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 20:26:46.864632   64676 node_conditions.go:123] node cpu capacity is 2
	I1001 20:26:46.864644   64676 node_conditions.go:105] duration metric: took 3.023838ms to run NodePressure ...
	I1001 20:26:46.864657   64676 start.go:241] waiting for startup goroutines ...
	I1001 20:26:46.864667   64676 start.go:246] waiting for cluster config update ...
	I1001 20:26:46.864682   64676 start.go:255] writing updated cluster config ...
	I1001 20:26:46.864960   64676 ssh_runner.go:195] Run: rm -f paused
	I1001 20:26:46.924982   64676 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 20:26:46.926817   64676 out.go:177] * Done! kubectl is now configured to use "no-preload-262337" cluster and "default" namespace by default
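As a follow-up check after the "Done!" line, the kube-system pods that the wait loop reported can be listed directly. This sketch assumes kubectl is on PATH and that the kubectl context is named after the profile, as suggested by the log line above; neither is guaranteed by the log itself.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "no-preload-262337",
		"get", "pods", "-n", "kube-system", "-o", "wide").CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err)
	}
	fmt.Print(string(out))
}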
	I1001 20:26:45.880599   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:48.948631   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:55.028660   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:58.100570   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:04.180661   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:07.252656   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:13.332644   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:16.404640   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:22.484714   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:25.556606   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:31.636609   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:34.712725   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:40.788632   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:43.940129   65592 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1001 20:27:43.940232   65592 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1001 20:27:43.942002   65592 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1001 20:27:43.942068   65592 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:27:43.942170   65592 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:27:43.942281   65592 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:27:43.942421   65592 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1001 20:27:43.942518   65592 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 20:27:43.944271   65592 out.go:235]   - Generating certificates and keys ...
	I1001 20:27:43.944389   65592 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:27:43.944486   65592 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:27:43.944600   65592 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 20:27:43.944693   65592 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 20:27:43.944797   65592 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 20:27:43.944888   65592 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 20:27:43.944985   65592 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 20:27:43.945072   65592 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 20:27:43.945190   65592 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 20:27:43.945301   65592 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 20:27:43.945361   65592 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 20:27:43.945420   65592 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:27:43.945467   65592 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:27:43.945515   65592 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:27:43.945585   65592 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:27:43.945651   65592 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:27:43.945772   65592 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:27:43.945899   65592 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:27:43.945961   65592 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:27:43.946057   65592 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:27:43.860704   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:43.947517   65592 out.go:235]   - Booting up control plane ...
	I1001 20:27:43.947644   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:27:43.947767   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:27:43.947861   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:27:43.947978   65592 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:27:43.948185   65592 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1001 20:27:43.948258   65592 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1001 20:27:43.948396   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.948618   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.948695   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.948930   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.948991   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.949149   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.949232   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.949380   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.949439   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.949597   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.949616   65592 kubeadm.go:310] 
	I1001 20:27:43.949658   65592 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1001 20:27:43.949693   65592 kubeadm.go:310] 		timed out waiting for the condition
	I1001 20:27:43.949704   65592 kubeadm.go:310] 
	I1001 20:27:43.949737   65592 kubeadm.go:310] 	This error is likely caused by:
	I1001 20:27:43.949766   65592 kubeadm.go:310] 		- The kubelet is not running
	I1001 20:27:43.949863   65592 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1001 20:27:43.949871   65592 kubeadm.go:310] 
	I1001 20:27:43.949968   65592 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1001 20:27:43.950000   65592 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1001 20:27:43.950034   65592 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1001 20:27:43.950040   65592 kubeadm.go:310] 
	I1001 20:27:43.950136   65592 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1001 20:27:43.950207   65592 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1001 20:27:43.950213   65592 kubeadm.go:310] 
	I1001 20:27:43.950310   65592 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1001 20:27:43.950389   65592 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1001 20:27:43.950454   65592 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1001 20:27:43.950533   65592 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1001 20:27:43.950566   65592 kubeadm.go:310] 
	W1001 20:27:43.950665   65592 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1001 20:27:43.950707   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1001 20:27:44.404995   65592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:27:44.421130   65592 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:27:44.431204   65592 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:27:44.431228   65592 kubeadm.go:157] found existing configuration files:
	
	I1001 20:27:44.431270   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 20:27:44.440792   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:27:44.440857   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:27:44.450469   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 20:27:44.459640   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:27:44.459695   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:27:44.469335   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 20:27:44.478848   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:27:44.478904   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:27:44.489162   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 20:27:44.501070   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:27:44.501157   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 20:27:44.511970   65592 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 20:27:44.728685   65592 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
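The stale-config cleanup just above boils down to: grep each kubeconfig under /etc/kubernetes for the expected control-plane endpoint, remove the file when the endpoint (or the file) is missing, then retry kubeadm init. A minimal Go sketch of that check, with the endpoint and paths copied from the log (illustrative only, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is absent or the file does not exist.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing stale config\n", endpoint, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}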
	I1001 20:27:49.940611   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:53.016657   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:59.092700   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:02.164611   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:08.244707   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:11.316686   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:17.400607   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:20.468660   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:26.548687   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:29.624608   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:35.700638   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:38.772693   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:44.852721   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:47.924690   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:54.004674   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:57.080644   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:29:03.156750   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:29:06.232700   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:29:12.308749   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:29:15.380633   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:29:18.381649   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 20:29:18.381689   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetMachineName
	I1001 20:29:18.382037   68418 buildroot.go:166] provisioning hostname "default-k8s-diff-port-878552"
	I1001 20:29:18.382063   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetMachineName
	I1001 20:29:18.382291   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:18.384714   68418 machine.go:96] duration metric: took 4m37.419094583s to provisionDockerMachine
	I1001 20:29:18.384772   68418 fix.go:56] duration metric: took 4m37.442164125s for fixHost
	I1001 20:29:18.384782   68418 start.go:83] releasing machines lock for "default-k8s-diff-port-878552", held for 4m37.442187455s
	W1001 20:29:18.384813   68418 start.go:714] error starting host: provision: host is not running
	W1001 20:29:18.384993   68418 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1001 20:29:18.385017   68418 start.go:729] Will try again in 5 seconds ...
	I1001 20:29:23.387086   68418 start.go:360] acquireMachinesLock for default-k8s-diff-port-878552: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 20:29:23.387232   68418 start.go:364] duration metric: took 101.596µs to acquireMachinesLock for "default-k8s-diff-port-878552"
	I1001 20:29:23.387273   68418 start.go:96] Skipping create...Using existing machine configuration
	I1001 20:29:23.387284   68418 fix.go:54] fixHost starting: 
	I1001 20:29:23.387645   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:29:23.387669   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:29:23.403371   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39149
	I1001 20:29:23.404008   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:29:23.404580   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:29:23.404603   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:29:23.405181   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:29:23.405410   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:29:23.405560   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetState
	I1001 20:29:23.407563   68418 fix.go:112] recreateIfNeeded on default-k8s-diff-port-878552: state=Stopped err=<nil>
	I1001 20:29:23.407589   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	W1001 20:29:23.407771   68418 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 20:29:23.409721   68418 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-878552" ...
	I1001 20:29:23.410973   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Start
	I1001 20:29:23.411207   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Ensuring networks are active...
	I1001 20:29:23.412117   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Ensuring network default is active
	I1001 20:29:23.412576   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Ensuring network mk-default-k8s-diff-port-878552 is active
	I1001 20:29:23.412956   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Getting domain xml...
	I1001 20:29:23.413589   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Creating domain...
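The restart sequence above (ensuring networks are active, recreating the domain) can be inspected from the host side with libvirt's virsh CLI, assuming it is installed where the kvm2 driver runs; the domain and network names are copied from the log. A hedged Go sketch:

package main

import (
	"fmt"
	"os/exec"
)

// virsh runs a virsh subcommand against the system libvirt daemon and prints its output.
func virsh(args ...string) {
	out, err := exec.Command("virsh", append([]string{"-c", "qemu:///system"}, args...)...).CombinedOutput()
	if err != nil {
		fmt.Println("virsh", args, "failed:", err)
	}
	fmt.Print(string(out))
}

func main() {
	virsh("list", "--all")                             // is the domain defined and running?
	virsh("net-list", "--all")                         // are default and mk-default-k8s-diff-port-878552 active?
	virsh("domifaddr", "default-k8s-diff-port-878552") // which IP, if any, did the domain obtain?
}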
	I1001 20:29:24.744972   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting to get IP...
	I1001 20:29:24.746001   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:24.746641   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:24.746710   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:24.746607   69521 retry.go:31] will retry after 260.966833ms: waiting for machine to come up
	I1001 20:29:25.009284   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.009825   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.009849   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:25.009778   69521 retry.go:31] will retry after 308.10041ms: waiting for machine to come up
	I1001 20:29:25.319153   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.319717   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.319752   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:25.319652   69521 retry.go:31] will retry after 342.802984ms: waiting for machine to come up
	I1001 20:29:25.664405   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.664893   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.664920   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:25.664816   69521 retry.go:31] will retry after 397.002924ms: waiting for machine to come up
	I1001 20:29:26.063628   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:26.064235   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:26.064259   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:26.064201   69521 retry.go:31] will retry after 526.648832ms: waiting for machine to come up
	I1001 20:29:26.592834   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:26.593284   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:26.593307   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:26.593226   69521 retry.go:31] will retry after 642.569388ms: waiting for machine to come up
	I1001 20:29:27.237224   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:27.237775   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:27.237808   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:27.237714   69521 retry.go:31] will retry after 963.05932ms: waiting for machine to come up
	I1001 20:29:28.202841   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:28.203333   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:28.203363   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:28.203287   69521 retry.go:31] will retry after 1.372004234s: waiting for machine to come up
	I1001 20:29:29.577175   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:29.577678   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:29.577706   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:29.577627   69521 retry.go:31] will retry after 1.693508507s: waiting for machine to come up
	I1001 20:29:31.273758   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:31.274247   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:31.274274   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:31.274201   69521 retry.go:31] will retry after 1.793304779s: waiting for machine to come up
	I1001 20:29:33.069467   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:33.069894   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:33.069915   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:33.069861   69521 retry.go:31] will retry after 2.825253867s: waiting for machine to come up
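The "will retry after ..." lines above come from a poll-with-growing-delay loop around the IP lookup. A hypothetical Go sketch of that pattern, where getIP is only a stand-in for the real libmachine lookup and the delays are chosen to roughly match the cadence in the log:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// getIP is a placeholder; minikube actually queries the libvirt domain's DHCP lease.
func getIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries the lookup with a randomized, growing delay between attempts.
func waitForIP(maxAttempts int) (string, error) {
	delay := 250 * time.Millisecond
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		ip, err := getIP()
		if err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("attempt %d: %v; will retry after %v\n", attempt, err, wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the base delay on each failure
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	if ip, err := waitForIP(15); err == nil {
		fmt.Println("machine is up at", ip)
	} else {
		fmt.Println(err)
	}
}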
	I1001 20:29:40.678676   65592 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1001 20:29:40.678797   65592 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1001 20:29:40.680563   65592 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1001 20:29:40.680613   65592 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:29:40.680680   65592 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:29:40.680788   65592 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:29:40.680868   65592 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1001 20:29:40.681030   65592 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 20:29:40.683042   65592 out.go:235]   - Generating certificates and keys ...
	I1001 20:29:40.683149   65592 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:29:40.683245   65592 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:29:40.683353   65592 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 20:29:40.683435   65592 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 20:29:40.683545   65592 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 20:29:40.683605   65592 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 20:29:40.683665   65592 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 20:29:40.683723   65592 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 20:29:40.683793   65592 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 20:29:40.683878   65592 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 20:29:40.683956   65592 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 20:29:40.684054   65592 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:29:40.684127   65592 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:29:40.684212   65592 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:29:40.684303   65592 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:29:40.684414   65592 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:29:40.684551   65592 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:29:40.684661   65592 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:29:40.684724   65592 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:29:40.684827   65592 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:29:35.897417   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:35.897916   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:35.897949   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:35.897862   69521 retry.go:31] will retry after 3.519866937s: waiting for machine to come up
	I1001 20:29:39.419142   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:39.419528   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:39.419554   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:39.419494   69521 retry.go:31] will retry after 3.507101438s: waiting for machine to come up
	I1001 20:29:40.686427   65592 out.go:235]   - Booting up control plane ...
	I1001 20:29:40.686534   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:29:40.686621   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:29:40.686710   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:29:40.686820   65592 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:29:40.686996   65592 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1001 20:29:40.687063   65592 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1001 20:29:40.687127   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.687336   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.687443   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.687674   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.687759   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.687958   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.688047   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.688212   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.688274   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.688510   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.688519   65592 kubeadm.go:310] 
	I1001 20:29:40.688566   65592 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1001 20:29:40.688610   65592 kubeadm.go:310] 		timed out waiting for the condition
	I1001 20:29:40.688617   65592 kubeadm.go:310] 
	I1001 20:29:40.688646   65592 kubeadm.go:310] 	This error is likely caused by:
	I1001 20:29:40.688680   65592 kubeadm.go:310] 		- The kubelet is not running
	I1001 20:29:40.688770   65592 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1001 20:29:40.688778   65592 kubeadm.go:310] 
	I1001 20:29:40.688882   65592 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1001 20:29:40.688937   65592 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1001 20:29:40.688986   65592 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1001 20:29:40.688996   65592 kubeadm.go:310] 
	I1001 20:29:40.689114   65592 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1001 20:29:40.689222   65592 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1001 20:29:40.689237   65592 kubeadm.go:310] 
	I1001 20:29:40.689376   65592 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1001 20:29:40.689517   65592 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1001 20:29:40.689638   65592 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1001 20:29:40.689709   65592 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1001 20:29:40.689786   65592 kubeadm.go:310] 
	I1001 20:29:40.689796   65592 kubeadm.go:394] duration metric: took 7m56.416911577s to StartCluster
	I1001 20:29:40.689838   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:29:40.689896   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:29:40.733027   65592 cri.go:89] found id: ""
	I1001 20:29:40.733059   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.733068   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:29:40.733073   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:29:40.733120   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:29:40.767975   65592 cri.go:89] found id: ""
	I1001 20:29:40.768010   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.768021   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:29:40.768029   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:29:40.768095   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:29:40.802624   65592 cri.go:89] found id: ""
	I1001 20:29:40.802657   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.802668   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:29:40.802676   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:29:40.802748   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:29:40.838109   65592 cri.go:89] found id: ""
	I1001 20:29:40.838142   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.838151   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:29:40.838157   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:29:40.838204   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:29:40.873083   65592 cri.go:89] found id: ""
	I1001 20:29:40.873112   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.873124   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:29:40.873131   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:29:40.873192   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:29:40.907675   65592 cri.go:89] found id: ""
	I1001 20:29:40.907705   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.907714   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:29:40.907720   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:29:40.907775   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:29:40.941641   65592 cri.go:89] found id: ""
	I1001 20:29:40.941669   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.941678   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:29:40.941691   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:29:40.941748   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:29:40.978189   65592 cri.go:89] found id: ""
	I1001 20:29:40.978216   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.978227   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:29:40.978238   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:29:40.978254   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:29:41.053798   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:29:41.053823   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:29:41.053835   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:29:41.160669   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:29:41.160715   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:29:41.218152   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:29:41.218182   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:29:41.274784   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:29:41.274821   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1001 20:29:41.288554   65592 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1001 20:29:41.288613   65592 out.go:270] * 
	W1001 20:29:41.288663   65592 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1001 20:29:41.288674   65592 out.go:270] * 
	W1001 20:29:41.289525   65592 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 20:29:41.292969   65592 out.go:201] 
	W1001 20:29:41.294238   65592 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1001 20:29:41.294278   65592 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1001 20:29:41.294297   65592 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1001 20:29:41.295783   65592 out.go:201] 
	
	
	==> CRI-O <==
	Oct 01 20:29:42 old-k8s-version-359369 crio[632]: time="2024-10-01 20:29:42.370751119Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814582370732691,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=264a5628-2a00-47d8-946d-a644fc5d739a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:29:42 old-k8s-version-359369 crio[632]: time="2024-10-01 20:29:42.371284439Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fee51ea6-ec34-475c-b0ce-115c528f7243 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:29:42 old-k8s-version-359369 crio[632]: time="2024-10-01 20:29:42.371335644Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fee51ea6-ec34-475c-b0ce-115c528f7243 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:29:42 old-k8s-version-359369 crio[632]: time="2024-10-01 20:29:42.371365605Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fee51ea6-ec34-475c-b0ce-115c528f7243 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:29:42 old-k8s-version-359369 crio[632]: time="2024-10-01 20:29:42.403884006Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=751e65d7-b647-4287-8a56-5205018069fa name=/runtime.v1.RuntimeService/Version
	Oct 01 20:29:42 old-k8s-version-359369 crio[632]: time="2024-10-01 20:29:42.403953742Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=751e65d7-b647-4287-8a56-5205018069fa name=/runtime.v1.RuntimeService/Version
	Oct 01 20:29:42 old-k8s-version-359369 crio[632]: time="2024-10-01 20:29:42.405443328Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c0533840-16f2-4784-87ad-8bfbc129741b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:29:42 old-k8s-version-359369 crio[632]: time="2024-10-01 20:29:42.405859537Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814582405833192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c0533840-16f2-4784-87ad-8bfbc129741b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:29:42 old-k8s-version-359369 crio[632]: time="2024-10-01 20:29:42.406500196Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4610ace-2d04-43f4-8c61-e2c6f2381fe7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:29:42 old-k8s-version-359369 crio[632]: time="2024-10-01 20:29:42.406564994Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4610ace-2d04-43f4-8c61-e2c6f2381fe7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:29:42 old-k8s-version-359369 crio[632]: time="2024-10-01 20:29:42.406595703Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a4610ace-2d04-43f4-8c61-e2c6f2381fe7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:29:42 old-k8s-version-359369 crio[632]: time="2024-10-01 20:29:42.442695027Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=752803d2-0277-4dd8-b489-3c331c1e2363 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:29:42 old-k8s-version-359369 crio[632]: time="2024-10-01 20:29:42.442801010Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=752803d2-0277-4dd8-b489-3c331c1e2363 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:29:42 old-k8s-version-359369 crio[632]: time="2024-10-01 20:29:42.444215276Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b7a5be8f-4f3c-441f-af6e-472d99fa9cff name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:29:42 old-k8s-version-359369 crio[632]: time="2024-10-01 20:29:42.444577950Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814582444553961,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b7a5be8f-4f3c-441f-af6e-472d99fa9cff name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:29:42 old-k8s-version-359369 crio[632]: time="2024-10-01 20:29:42.445148754Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11a91f8a-2246-433e-bd14-71c7f9dc2bdc name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:29:42 old-k8s-version-359369 crio[632]: time="2024-10-01 20:29:42.445240844Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=11a91f8a-2246-433e-bd14-71c7f9dc2bdc name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:29:42 old-k8s-version-359369 crio[632]: time="2024-10-01 20:29:42.445290656Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=11a91f8a-2246-433e-bd14-71c7f9dc2bdc name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:29:42 old-k8s-version-359369 crio[632]: time="2024-10-01 20:29:42.477588776Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=da458583-6dc7-4300-bb91-2ac337d1a23e name=/runtime.v1.RuntimeService/Version
	Oct 01 20:29:42 old-k8s-version-359369 crio[632]: time="2024-10-01 20:29:42.477677782Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da458583-6dc7-4300-bb91-2ac337d1a23e name=/runtime.v1.RuntimeService/Version
	Oct 01 20:29:42 old-k8s-version-359369 crio[632]: time="2024-10-01 20:29:42.478852417Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ddb5fd88-10ec-4a80-9e24-1835bdccbff6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:29:42 old-k8s-version-359369 crio[632]: time="2024-10-01 20:29:42.479265717Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814582479243199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ddb5fd88-10ec-4a80-9e24-1835bdccbff6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:29:42 old-k8s-version-359369 crio[632]: time="2024-10-01 20:29:42.479836577Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=99e69c6e-6a81-4dd9-a11e-38a6fb0e427c name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:29:42 old-k8s-version-359369 crio[632]: time="2024-10-01 20:29:42.479895408Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=99e69c6e-6a81-4dd9-a11e-38a6fb0e427c name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:29:42 old-k8s-version-359369 crio[632]: time="2024-10-01 20:29:42.479943456Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=99e69c6e-6a81-4dd9-a11e-38a6fb0e427c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 1 20:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.061451] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043514] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.028959] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.047745] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.355137] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.538724] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.065709] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077031] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.174087] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.145035] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.248393] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +6.785134] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.069182] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.078495] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[ +11.012728] kauditd_printk_skb: 46 callbacks suppressed
	[Oct 1 20:25] systemd-fstab-generator[5075]: Ignoring "noauto" option for root device
	[Oct 1 20:27] systemd-fstab-generator[5356]: Ignoring "noauto" option for root device
	[  +0.061063] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:29:42 up 8 min,  0 users,  load average: 0.06, 0.06, 0.01
	Linux old-k8s-version-359369 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 01 20:29:40 old-k8s-version-359369 kubelet[5537]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/kubernetes/typed/core/v1/service.go:89 +0x1a5
	Oct 01 20:29:40 old-k8s-version-359369 kubelet[5537]: k8s.io/kubernetes/vendor/k8s.io/client-go/informers/core/v1.NewFilteredServiceInformer.func1(0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x48aa087, ...)
	Oct 01 20:29:40 old-k8s-version-359369 kubelet[5537]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/informers/core/v1/service.go:65 +0x1d5
	Oct 01 20:29:40 old-k8s-version-359369 kubelet[5537]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*ListWatch).List(0xc000a0ac40, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Oct 01 20:29:40 old-k8s-version-359369 kubelet[5537]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/listwatch.go:106 +0x78
	Oct 01 20:29:40 old-k8s-version-359369 kubelet[5537]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1.1.2(0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x48aa087, ...)
	Oct 01 20:29:40 old-k8s-version-359369 kubelet[5537]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:277 +0x75
	Oct 01 20:29:40 old-k8s-version-359369 kubelet[5537]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/pager.SimplePageFunc.func1(0x4f7fe00, 0xc000122010, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Oct 01 20:29:40 old-k8s-version-359369 kubelet[5537]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/pager/pager.go:40 +0x64
	Oct 01 20:29:40 old-k8s-version-359369 kubelet[5537]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/pager.(*ListPager).List(0xc000fb1e60, 0x4f7fe00, 0xc000122010, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Oct 01 20:29:40 old-k8s-version-359369 kubelet[5537]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/pager/pager.go:91 +0x179
	Oct 01 20:29:40 old-k8s-version-359369 kubelet[5537]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1.1(0xc000c6c840, 0xc00024e0e0, 0xc0009e6090, 0xc000c6a320, 0xc00098a078, 0xc000c6a330, 0xc000c7a7e0)
	Oct 01 20:29:40 old-k8s-version-359369 kubelet[5537]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:302 +0x1a5
	Oct 01 20:29:40 old-k8s-version-359369 kubelet[5537]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1
	Oct 01 20:29:40 old-k8s-version-359369 kubelet[5537]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:268 +0x295
	Oct 01 20:29:40 old-k8s-version-359369 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 01 20:29:40 old-k8s-version-359369 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 01 20:29:41 old-k8s-version-359369 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Oct 01 20:29:41 old-k8s-version-359369 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 01 20:29:41 old-k8s-version-359369 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 01 20:29:41 old-k8s-version-359369 kubelet[5593]: I1001 20:29:41.211517    5593 server.go:416] Version: v1.20.0
	Oct 01 20:29:41 old-k8s-version-359369 kubelet[5593]: I1001 20:29:41.212017    5593 server.go:837] Client rotation is on, will bootstrap in background
	Oct 01 20:29:41 old-k8s-version-359369 kubelet[5593]: I1001 20:29:41.214783    5593 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 01 20:29:41 old-k8s-version-359369 kubelet[5593]: W1001 20:29:41.219444    5593 manager.go:159] Cannot detect current cgroup on cgroup v2
	Oct 01 20:29:41 old-k8s-version-359369 kubelet[5593]: I1001 20:29:41.219802    5593 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-359369 -n old-k8s-version-359369
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-359369 -n old-k8s-version-359369: exit status 2 (227.466238ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-359369" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (755.91s)
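
A minimal triage sketch for this failure, based only on hints already present in the captured log above (kubelet.service repeatedly exiting with status 255 and minikube's own suggestion about the cgroup driver). The profile name old-k8s-version-359369 is taken from the log; everything else is an assumption about how one might investigate, not a confirmed fix or part of the test harness:

	# inside the node (minikube ssh -p old-k8s-version-359369): see why the kubelet keeps crashing
	systemctl status kubelet
	journalctl -xeu kubelet -n 200
	# confirm no control-plane containers ever started under CRI-O
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# then retry the start with the cgroup-driver override that minikube itself suggests above
	minikube start -p old-k8s-version-359369 --extra-config=kubelet.cgroup-driver=systemd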

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-878552 --alsologtostderr -v=3
E1001 20:23:22.098077   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-878552 --alsologtostderr -v=3: exit status 82 (2m0.547770883s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-878552"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 20:22:09.402282   67640 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:22:09.402421   67640 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:22:09.402433   67640 out.go:358] Setting ErrFile to fd 2...
	I1001 20:22:09.402439   67640 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:22:09.402625   67640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 20:22:09.402867   67640 out.go:352] Setting JSON to false
	I1001 20:22:09.402935   67640 mustload.go:65] Loading cluster: default-k8s-diff-port-878552
	I1001 20:22:09.403266   67640 config.go:182] Loaded profile config "default-k8s-diff-port-878552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:22:09.403331   67640 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/config.json ...
	I1001 20:22:09.403484   67640 mustload.go:65] Loading cluster: default-k8s-diff-port-878552
	I1001 20:22:09.403581   67640 config.go:182] Loaded profile config "default-k8s-diff-port-878552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:22:09.403612   67640 stop.go:39] StopHost: default-k8s-diff-port-878552
	I1001 20:22:09.404012   67640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:22:09.404060   67640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:22:09.420029   67640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39123
	I1001 20:22:09.421200   67640 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:22:09.421851   67640 main.go:141] libmachine: Using API Version  1
	I1001 20:22:09.421876   67640 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:22:09.422222   67640 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:22:09.424532   67640 out.go:177] * Stopping node "default-k8s-diff-port-878552"  ...
	I1001 20:22:09.425727   67640 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1001 20:22:09.425758   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:22:09.426006   67640 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1001 20:22:09.426037   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:22:09.428924   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:22:09.429392   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:20:43 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:22:09.429419   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:22:09.429628   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:22:09.429827   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:22:09.430036   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:22:09.430281   67640 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa Username:docker}
	I1001 20:22:09.549005   67640 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1001 20:22:09.611829   67640 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1001 20:22:09.673077   67640 main.go:141] libmachine: Stopping "default-k8s-diff-port-878552"...
	I1001 20:22:09.673125   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetState
	I1001 20:22:09.674727   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Stop
	I1001 20:22:09.678305   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 0/120
	I1001 20:22:10.680458   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 1/120
	I1001 20:22:11.681812   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 2/120
	I1001 20:22:12.683531   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 3/120
	I1001 20:22:13.685139   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 4/120
	I1001 20:22:14.687161   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 5/120
	I1001 20:22:15.688975   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 6/120
	I1001 20:22:16.690632   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 7/120
	I1001 20:22:17.692143   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 8/120
	I1001 20:22:18.694227   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 9/120
	I1001 20:22:19.695458   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 10/120
	I1001 20:22:20.696917   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 11/120
	I1001 20:22:21.698915   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 12/120
	I1001 20:22:22.700500   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 13/120
	I1001 20:22:23.702794   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 14/120
	I1001 20:22:24.704853   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 15/120
	I1001 20:22:25.707002   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 16/120
	I1001 20:22:26.708392   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 17/120
	I1001 20:22:27.710152   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 18/120
	I1001 20:22:28.711780   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 19/120
	I1001 20:22:29.714294   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 20/120
	I1001 20:22:30.716679   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 21/120
	I1001 20:22:31.718828   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 22/120
	I1001 20:22:32.720612   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 23/120
	I1001 20:22:33.722994   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 24/120
	I1001 20:22:34.725120   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 25/120
	I1001 20:22:35.726562   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 26/120
	I1001 20:22:36.728247   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 27/120
	I1001 20:22:37.729744   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 28/120
	I1001 20:22:38.731558   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 29/120
	I1001 20:22:39.733266   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 30/120
	I1001 20:22:40.734786   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 31/120
	I1001 20:22:41.736250   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 32/120
	I1001 20:22:42.738488   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 33/120
	I1001 20:22:43.740230   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 34/120
	I1001 20:22:44.742458   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 35/120
	I1001 20:22:45.744186   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 36/120
	I1001 20:22:46.745835   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 37/120
	I1001 20:22:47.747718   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 38/120
	I1001 20:22:48.749179   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 39/120
	I1001 20:22:49.751394   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 40/120
	I1001 20:22:50.752973   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 41/120
	I1001 20:22:51.755001   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 42/120
	I1001 20:22:52.756303   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 43/120
	I1001 20:22:53.757834   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 44/120
	I1001 20:22:54.759985   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 45/120
	I1001 20:22:55.761418   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 46/120
	I1001 20:22:56.763295   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 47/120
	I1001 20:22:57.764620   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 48/120
	I1001 20:22:58.766937   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 49/120
	I1001 20:22:59.769205   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 50/120
	I1001 20:23:00.770765   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 51/120
	I1001 20:23:01.772081   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 52/120
	I1001 20:23:02.773494   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 53/120
	I1001 20:23:03.774962   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 54/120
	I1001 20:23:04.776949   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 55/120
	I1001 20:23:05.778774   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 56/120
	I1001 20:23:06.780403   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 57/120
	I1001 20:23:07.781941   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 58/120
	I1001 20:23:08.783479   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 59/120
	I1001 20:23:09.785745   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 60/120
	I1001 20:23:10.788739   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 61/120
	I1001 20:23:11.790141   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 62/120
	I1001 20:23:12.791763   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 63/120
	I1001 20:23:13.793498   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 64/120
	I1001 20:23:14.795827   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 65/120
	I1001 20:23:15.797581   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 66/120
	I1001 20:23:16.799778   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 67/120
	I1001 20:23:17.802017   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 68/120
	I1001 20:23:18.804617   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 69/120
	I1001 20:23:19.807085   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 70/120
	I1001 20:23:20.808656   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 71/120
	I1001 20:23:21.811450   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 72/120
	I1001 20:23:22.813460   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 73/120
	I1001 20:23:23.815809   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 74/120
	I1001 20:23:24.818092   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 75/120
	I1001 20:23:25.819990   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 76/120
	I1001 20:23:26.821745   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 77/120
	I1001 20:23:27.823461   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 78/120
	I1001 20:23:28.825044   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 79/120
	I1001 20:23:29.827254   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 80/120
	I1001 20:23:30.829661   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 81/120
	I1001 20:23:31.831877   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 82/120
	I1001 20:23:32.833202   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 83/120
	I1001 20:23:33.834908   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 84/120
	I1001 20:23:34.836823   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 85/120
	I1001 20:23:35.838291   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 86/120
	I1001 20:23:36.839633   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 87/120
	I1001 20:23:37.841038   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 88/120
	I1001 20:23:38.842290   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 89/120
	I1001 20:23:39.844815   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 90/120
	I1001 20:23:40.846357   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 91/120
	I1001 20:23:41.848253   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 92/120
	I1001 20:23:42.849718   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 93/120
	I1001 20:23:43.851804   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 94/120
	I1001 20:23:44.854024   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 95/120
	I1001 20:23:45.855595   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 96/120
	I1001 20:23:46.857029   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 97/120
	I1001 20:23:47.859170   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 98/120
	I1001 20:23:48.860654   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 99/120
	I1001 20:23:49.863129   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 100/120
	I1001 20:23:50.864846   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 101/120
	I1001 20:23:51.866134   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 102/120
	I1001 20:23:52.867604   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 103/120
	I1001 20:23:53.868883   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 104/120
	I1001 20:23:54.870782   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 105/120
	I1001 20:23:55.872288   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 106/120
	I1001 20:23:56.873676   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 107/120
	I1001 20:23:57.875363   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 108/120
	I1001 20:23:58.876735   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 109/120
	I1001 20:23:59.879149   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 110/120
	I1001 20:24:00.881514   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 111/120
	I1001 20:24:01.883395   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 112/120
	I1001 20:24:02.885318   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 113/120
	I1001 20:24:03.887035   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 114/120
	I1001 20:24:04.889217   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 115/120
	I1001 20:24:05.890920   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 116/120
	I1001 20:24:06.892687   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 117/120
	I1001 20:24:07.894049   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 118/120
	I1001 20:24:08.895517   67640 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for machine to stop 119/120
	I1001 20:24:09.896827   67640 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1001 20:24:09.896877   67640 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1001 20:24:09.898721   67640 out.go:201] 
	W1001 20:24:09.900205   67640 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1001 20:24:09.900227   67640 out.go:270] * 
	* 
	W1001 20:24:09.902741   67640 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 20:24:09.903950   67640 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-878552 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-878552 -n default-k8s-diff-port-878552
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-878552 -n default-k8s-diff-port-878552: exit status 3 (18.498595902s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1001 20:24:28.404785   68196 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.4:22: connect: no route to host
	E1001 20:24:28.404814   68196 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.4:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-878552" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.05s)
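
For context on the GUEST_STOP_TIMEOUT above: the driver log shows a bounded poll ("Waiting for machine to stop N/120") that gives up after 120 one-second checks while the VM still reports "Running", which is what surfaces as exit status 82. The following is a minimal sketch of that pattern only, assuming a hypothetical driver with Stop() and State() methods; it is not minikube's actual implementation, and the 120-attempt budget and messages are taken from the log lines above purely for illustration.

// stop_poll_sketch.go - hypothetical bounded stop-poll loop (assumption, not minikube code)
package main

import (
	"errors"
	"fmt"
	"time"
)

// fakeVM stands in for a libmachine-style driver; it never leaves "Running",
// so the loop below exhausts its budget the way the log above does.
type fakeVM struct{ state string }

func (v *fakeVM) Stop() error   { return nil }     // request shutdown
func (v *fakeVM) State() string { return v.state } // report current state

// stopWithTimeout polls the VM state once per second for up to `attempts`
// checks, mirroring the "Waiting for machine to stop N/120" lines above.
func stopWithTimeout(vm *fakeVM, attempts int) error {
	if err := vm.Stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		if vm.State() == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "` + vm.State() + `"`)
}

func main() {
	vm := &fakeVM{state: "Running"}
	if err := stopWithTimeout(vm, 120); err != nil {
		fmt.Println("stop err:", err)
	}
}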

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-878552 -n default-k8s-diff-port-878552
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-878552 -n default-k8s-diff-port-878552: exit status 3 (3.168253817s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1001 20:24:31.572771   68291 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.4:22: connect: no route to host
	E1001 20:24:31.572792   68291 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.4:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-878552 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-878552 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155765927s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.4:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-878552 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-878552 -n default-k8s-diff-port-878552
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-878552 -n default-k8s-diff-port-878552: exit status 3 (3.059599786s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1001 20:24:40.788720   68370 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.4:22: connect: no route to host
	E1001 20:24:40.788744   68370 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.4:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-878552" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
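
The repeated "dial tcp 192.168.50.4:22: connect: no route to host" errors above are why both the status check and the addon enable fail: the SSH session to the guest cannot be established at all. A minimal sketch of such a reachability probe is shown below, assuming the address and timeout from the log; this is an illustration only, not the command minikube runs.

// reachability_sketch.go - hypothetical TCP probe of the guest SSH port (assumption)
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.50.4:22", 3*time.Second)
	if err != nil {
		fmt.Println("status error:", err) // e.g. connect: no route to host
		return
	}
	defer conn.Close()
	fmt.Println("ssh port reachable")
}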

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.58s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1001 20:26:34.839711   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-106982 -n embed-certs-106982
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-01 20:35:29.107057033 +0000 UTC m=+6080.055860437
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-106982 -n embed-certs-106982
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-106982 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-106982 logs -n 25: (1.366067954s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-402897                              | cert-expiration-402897       | jenkins | v1.34.0 | 01 Oct 24 20:12 UTC | 01 Oct 24 20:12 UTC |
	| start   | -p embed-certs-106982                                  | embed-certs-106982           | jenkins | v1.34.0 | 01 Oct 24 20:12 UTC | 01 Oct 24 20:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-262337             | no-preload-262337            | jenkins | v1.34.0 | 01 Oct 24 20:13 UTC | 01 Oct 24 20:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-262337                                   | no-preload-262337            | jenkins | v1.34.0 | 01 Oct 24 20:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-106982            | embed-certs-106982           | jenkins | v1.34.0 | 01 Oct 24 20:13 UTC | 01 Oct 24 20:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-106982                                  | embed-certs-106982           | jenkins | v1.34.0 | 01 Oct 24 20:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396    | jenkins | v1.34.0 | 01 Oct 24 20:14 UTC | 01 Oct 24 20:14 UTC |
	| start   | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396    | jenkins | v1.34.0 | 01 Oct 24 20:14 UTC | 01 Oct 24 20:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396    | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396    | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC | 01 Oct 24 20:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-359369        | old-k8s-version-359369       | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-262337                  | no-preload-262337            | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-262337                                   | no-preload-262337            | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC | 01 Oct 24 20:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-106982                 | embed-certs-106982           | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396    | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC | 01 Oct 24 20:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-556200 | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC | 01 Oct 24 20:16 UTC |
	|         | disable-driver-mounts-556200                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC | 01 Oct 24 20:21 UTC |
	|         | default-k8s-diff-port-878552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-106982                                  | embed-certs-106982           | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC | 01 Oct 24 20:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-359369                              | old-k8s-version-359369       | jenkins | v1.34.0 | 01 Oct 24 20:17 UTC | 01 Oct 24 20:17 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-359369             | old-k8s-version-359369       | jenkins | v1.34.0 | 01 Oct 24 20:17 UTC | 01 Oct 24 20:17 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-359369                              | old-k8s-version-359369       | jenkins | v1.34.0 | 01 Oct 24 20:17 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-878552  | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:22 UTC | 01 Oct 24 20:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:22 UTC |                     |
	|         | default-k8s-diff-port-878552                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-878552       | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:24 UTC | 01 Oct 24 20:34 UTC |
	|         | default-k8s-diff-port-878552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 20:24:40
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 20:24:40.832961   68418 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:24:40.833061   68418 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:24:40.833066   68418 out.go:358] Setting ErrFile to fd 2...
	I1001 20:24:40.833070   68418 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:24:40.833265   68418 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 20:24:40.833818   68418 out.go:352] Setting JSON to false
	I1001 20:24:40.834796   68418 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7623,"bootTime":1727806658,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 20:24:40.834894   68418 start.go:139] virtualization: kvm guest
	I1001 20:24:40.837148   68418 out.go:177] * [default-k8s-diff-port-878552] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 20:24:40.838511   68418 notify.go:220] Checking for updates...
	I1001 20:24:40.838551   68418 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 20:24:40.839938   68418 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 20:24:40.841161   68418 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:24:40.842268   68418 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 20:24:40.843373   68418 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 20:24:40.844538   68418 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 20:24:40.846141   68418 config.go:182] Loaded profile config "default-k8s-diff-port-878552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:24:40.846513   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:24:40.846561   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:24:40.862168   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42661
	I1001 20:24:40.862628   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:24:40.863294   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:24:40.863326   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:24:40.863699   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:24:40.863903   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:24:40.864180   68418 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 20:24:40.864548   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:24:40.864620   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:24:40.880173   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45711
	I1001 20:24:40.880719   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:24:40.881220   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:24:40.881245   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:24:40.881581   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:24:40.881795   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:24:40.920802   68418 out.go:177] * Using the kvm2 driver based on existing profile
	I1001 20:24:40.921986   68418 start.go:297] selected driver: kvm2
	I1001 20:24:40.921999   68418 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-878552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-878552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.4 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks
:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:24:40.922122   68418 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 20:24:40.922802   68418 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:24:40.922895   68418 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-11198/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 20:24:40.938386   68418 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 20:24:40.938811   68418 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:24:40.938841   68418 cni.go:84] Creating CNI manager for ""
	I1001 20:24:40.938880   68418 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:24:40.938931   68418 start.go:340] cluster config:
	{Name:default-k8s-diff-port-878552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-878552 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.4 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-h
ost Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:24:40.939036   68418 iso.go:125] acquiring lock: {Name:mk06aa0d5182c9fbfa5e40313025cbec2b4400a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:24:40.940656   68418 out.go:177] * Starting "default-k8s-diff-port-878552" primary control-plane node in "default-k8s-diff-port-878552" cluster
	I1001 20:24:40.941946   68418 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 20:24:40.942006   68418 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 20:24:40.942023   68418 cache.go:56] Caching tarball of preloaded images
	I1001 20:24:40.942155   68418 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 20:24:40.942166   68418 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 20:24:40.942298   68418 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/config.json ...
	I1001 20:24:40.942537   68418 start.go:360] acquireMachinesLock for default-k8s-diff-port-878552: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 20:24:40.942581   68418 start.go:364] duration metric: took 24.859µs to acquireMachinesLock for "default-k8s-diff-port-878552"
	I1001 20:24:40.942601   68418 start.go:96] Skipping create...Using existing machine configuration
	I1001 20:24:40.942608   68418 fix.go:54] fixHost starting: 
	I1001 20:24:40.942921   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:24:40.942954   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:24:40.958447   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36711
	I1001 20:24:40.958976   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:24:40.960190   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:24:40.960223   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:24:40.960575   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:24:40.960770   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:24:40.960921   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetState
	I1001 20:24:40.962765   68418 fix.go:112] recreateIfNeeded on default-k8s-diff-port-878552: state=Running err=<nil>
	W1001 20:24:40.962786   68418 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 20:24:40.964520   68418 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-878552" VM ...
	I1001 20:24:37.763268   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:40.262669   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:39.025570   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:39.040932   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:39.041011   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:39.076620   65592 cri.go:89] found id: ""
	I1001 20:24:39.076649   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.076659   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:39.076666   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:39.076734   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:39.113395   65592 cri.go:89] found id: ""
	I1001 20:24:39.113422   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.113430   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:39.113436   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:39.113490   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:39.147839   65592 cri.go:89] found id: ""
	I1001 20:24:39.147877   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.147890   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:39.147899   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:39.147966   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:39.179721   65592 cri.go:89] found id: ""
	I1001 20:24:39.179758   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.179769   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:39.179777   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:39.179842   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:39.211511   65592 cri.go:89] found id: ""
	I1001 20:24:39.211541   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.211549   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:39.211554   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:39.211603   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:39.243517   65592 cri.go:89] found id: ""
	I1001 20:24:39.243544   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.243552   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:39.243557   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:39.243623   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:39.276159   65592 cri.go:89] found id: ""
	I1001 20:24:39.276182   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.276189   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:39.276195   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:39.276239   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:39.307242   65592 cri.go:89] found id: ""
	I1001 20:24:39.307274   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.307285   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:39.307295   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:39.307307   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:39.387442   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:39.387486   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:39.423123   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:39.423156   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:39.474648   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:39.474686   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:39.488129   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:39.488158   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:39.557478   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:42.058114   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:42.071979   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:42.072056   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:42.110529   65592 cri.go:89] found id: ""
	I1001 20:24:42.110557   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.110565   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:42.110570   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:42.110619   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:42.145408   65592 cri.go:89] found id: ""
	I1001 20:24:42.145436   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.145445   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:42.145450   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:42.145509   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:42.180602   65592 cri.go:89] found id: ""
	I1001 20:24:42.180641   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.180655   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:42.180664   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:42.180722   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:38.119187   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:40.619080   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:40.965599   68418 machine.go:93] provisionDockerMachine start ...
	I1001 20:24:40.965619   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:24:40.965852   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:24:40.968710   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:24:40.969253   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:20:43 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:24:40.969286   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:24:40.969517   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:24:40.969724   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:24:40.969960   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:24:40.970112   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:24:40.970316   68418 main.go:141] libmachine: Using SSH client type: native
	I1001 20:24:40.970570   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1001 20:24:40.970584   68418 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 20:24:43.860755   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:24:42.262933   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:44.762857   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:42.214116   65592 cri.go:89] found id: ""
	I1001 20:24:42.214148   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.214160   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:42.214168   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:42.214224   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:42.246785   65592 cri.go:89] found id: ""
	I1001 20:24:42.246814   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.246825   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:42.246832   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:42.246900   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:42.281586   65592 cri.go:89] found id: ""
	I1001 20:24:42.281633   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.281645   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:42.281660   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:42.281724   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:42.318982   65592 cri.go:89] found id: ""
	I1001 20:24:42.319015   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.319025   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:42.319032   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:42.319085   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:42.350592   65592 cri.go:89] found id: ""
	I1001 20:24:42.350619   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.350638   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:42.350646   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:42.350659   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:42.429111   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:42.429152   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:42.466741   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:42.466775   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:42.516829   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:42.516870   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:42.530174   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:42.530201   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:42.600444   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:45.101469   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:45.113821   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:45.113904   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:45.148105   65592 cri.go:89] found id: ""
	I1001 20:24:45.148132   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.148146   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:45.148152   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:45.148196   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:45.180980   65592 cri.go:89] found id: ""
	I1001 20:24:45.181012   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.181027   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:45.181046   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:45.181113   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:45.216971   65592 cri.go:89] found id: ""
	I1001 20:24:45.217001   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.217010   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:45.217015   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:45.217060   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:45.252240   65592 cri.go:89] found id: ""
	I1001 20:24:45.252275   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.252287   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:45.252294   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:45.252354   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:45.287389   65592 cri.go:89] found id: ""
	I1001 20:24:45.287419   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.287434   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:45.287440   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:45.287501   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:45.319980   65592 cri.go:89] found id: ""
	I1001 20:24:45.320015   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.320027   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:45.320035   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:45.320101   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:45.351894   65592 cri.go:89] found id: ""
	I1001 20:24:45.351920   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.351931   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:45.351936   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:45.351984   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:45.385370   65592 cri.go:89] found id: ""
	I1001 20:24:45.385400   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.385412   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:45.385423   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:45.385485   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:45.449558   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:45.449584   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:45.449596   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:45.524322   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:45.524372   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:45.560729   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:45.560757   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:45.614098   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:45.614139   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:43.119614   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:45.121666   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:47.618362   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:46.932587   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:24:47.263384   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:49.761472   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:48.129944   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:48.143420   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:48.143496   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:48.175627   65592 cri.go:89] found id: ""
	I1001 20:24:48.175668   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.175682   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:48.175689   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:48.175747   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:48.210422   65592 cri.go:89] found id: ""
	I1001 20:24:48.210451   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.210462   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:48.210470   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:48.210535   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:48.243916   65592 cri.go:89] found id: ""
	I1001 20:24:48.243952   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.243963   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:48.243972   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:48.244027   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:48.275802   65592 cri.go:89] found id: ""
	I1001 20:24:48.275830   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.275845   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:48.275857   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:48.275917   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:48.311539   65592 cri.go:89] found id: ""
	I1001 20:24:48.311569   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.311579   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:48.311586   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:48.311648   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:48.342606   65592 cri.go:89] found id: ""
	I1001 20:24:48.342646   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.342658   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:48.342666   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:48.342718   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:48.375554   65592 cri.go:89] found id: ""
	I1001 20:24:48.375581   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.375591   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:48.375597   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:48.375642   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:48.407747   65592 cri.go:89] found id: ""
	I1001 20:24:48.407776   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.407789   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:48.407800   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:48.407814   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:48.457470   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:48.457503   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:48.470483   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:48.470517   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:48.533536   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:48.533565   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:48.533580   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:48.614530   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:48.614571   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
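	(Editorial note: the repeated cycle above — `crictl ps -a --quiet --name=<component>` per control-plane component, followed by "No container was found matching ..." warnings — is the log-gathering probe that runs while no containers exist. Below is a minimal, hypothetical Go sketch of that probe pattern, run locally for illustration; the component list is taken from the log, while the real minikube code in cri.go/logs.go executes these commands over SSH inside the guest VM rather than directly.)

	// probe_containers.go - hypothetical local approximation of the per-component
	// CRI probe seen in the log above.
	package main

	import (
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// --quiet prints only container IDs (one per line); -a includes exited containers.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				log.Printf("crictl failed for %q: %v", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				// Mirrors the repeated warning in the log: nothing matched this component.
				log.Printf("No container was found matching %q", name)
				continue
			}
			log.Printf("%s: %d container(s): %v", name, len(ids), ids)
		}
	}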
	I1001 20:24:51.157091   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:51.170292   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:51.170364   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:51.203784   65592 cri.go:89] found id: ""
	I1001 20:24:51.203809   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.203822   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:51.203828   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:51.203917   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:51.239789   65592 cri.go:89] found id: ""
	I1001 20:24:51.239826   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.239834   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:51.239840   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:51.239889   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:51.274562   65592 cri.go:89] found id: ""
	I1001 20:24:51.274595   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.274607   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:51.274617   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:51.274701   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:51.306172   65592 cri.go:89] found id: ""
	I1001 20:24:51.306199   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.306207   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:51.306213   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:51.306269   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:51.339631   65592 cri.go:89] found id: ""
	I1001 20:24:51.339660   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.339668   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:51.339674   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:51.339725   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:51.372128   65592 cri.go:89] found id: ""
	I1001 20:24:51.372154   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.372163   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:51.372169   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:51.372223   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:51.403790   65592 cri.go:89] found id: ""
	I1001 20:24:51.403818   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.403828   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:51.403842   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:51.403890   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:51.437771   65592 cri.go:89] found id: ""
	I1001 20:24:51.437799   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.437808   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:51.437816   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:51.437827   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:51.489824   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:51.489864   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:51.503478   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:51.503508   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:51.573741   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:51.573768   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:51.573780   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:51.662355   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:51.662391   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:49.618685   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:51.619186   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:53.012639   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:24:51.761853   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:53.762442   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:56.261818   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:54.199747   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:54.212731   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:54.212797   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:54.244554   65592 cri.go:89] found id: ""
	I1001 20:24:54.244586   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.244596   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:54.244602   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:54.244652   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:54.280636   65592 cri.go:89] found id: ""
	I1001 20:24:54.280667   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.280679   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:54.280686   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:54.280737   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:54.318213   65592 cri.go:89] found id: ""
	I1001 20:24:54.318246   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.318257   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:54.318265   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:54.318321   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:54.353563   65592 cri.go:89] found id: ""
	I1001 20:24:54.353595   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.353606   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:54.353615   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:54.353678   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:54.387770   65592 cri.go:89] found id: ""
	I1001 20:24:54.387795   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.387803   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:54.387809   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:54.387869   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:54.421289   65592 cri.go:89] found id: ""
	I1001 20:24:54.421317   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.421325   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:54.421332   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:54.421382   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:54.456221   65592 cri.go:89] found id: ""
	I1001 20:24:54.456261   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.456274   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:54.456282   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:54.456348   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:54.488174   65592 cri.go:89] found id: ""
	I1001 20:24:54.488208   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.488219   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:54.488228   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:54.488241   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:54.540981   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:54.541020   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:54.554099   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:54.554129   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:54.623978   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:54.624013   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:54.624034   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:54.704703   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:54.704738   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:54.119129   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:56.619282   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:56.088698   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:24:58.262173   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:00.761865   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:57.241791   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:57.254771   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:57.254843   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:57.290226   65592 cri.go:89] found id: ""
	I1001 20:24:57.290263   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.290271   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:57.290277   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:57.290336   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:57.324910   65592 cri.go:89] found id: ""
	I1001 20:24:57.324938   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.324946   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:57.324951   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:57.325068   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:57.360553   65592 cri.go:89] found id: ""
	I1001 20:24:57.360586   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.360601   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:57.360608   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:57.360669   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:57.395182   65592 cri.go:89] found id: ""
	I1001 20:24:57.395216   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.395229   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:57.395236   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:57.395296   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:57.428967   65592 cri.go:89] found id: ""
	I1001 20:24:57.428998   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.429011   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:57.429017   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:57.429072   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:57.462483   65592 cri.go:89] found id: ""
	I1001 20:24:57.462511   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.462519   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:57.462525   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:57.462581   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:57.495505   65592 cri.go:89] found id: ""
	I1001 20:24:57.495538   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.495550   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:57.495556   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:57.495615   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:57.528132   65592 cri.go:89] found id: ""
	I1001 20:24:57.528164   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.528176   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:57.528188   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:57.528203   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:57.596557   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:57.596583   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:57.596598   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:57.676797   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:57.676830   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:57.714624   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:57.714653   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:57.763801   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:57.763839   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:00.277808   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:00.291432   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:00.291489   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:00.327524   65592 cri.go:89] found id: ""
	I1001 20:25:00.327554   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.327562   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:00.327568   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:00.327618   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:00.364125   65592 cri.go:89] found id: ""
	I1001 20:25:00.364153   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.364162   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:00.364167   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:00.364229   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:00.404507   65592 cri.go:89] found id: ""
	I1001 20:25:00.404543   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.404555   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:00.404564   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:00.404770   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:00.438761   65592 cri.go:89] found id: ""
	I1001 20:25:00.438792   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.438800   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:00.438807   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:00.438862   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:00.473263   65592 cri.go:89] found id: ""
	I1001 20:25:00.473301   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.473313   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:00.473321   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:00.473391   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:00.510276   65592 cri.go:89] found id: ""
	I1001 20:25:00.510307   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.510317   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:00.510324   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:00.510383   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:00.545118   65592 cri.go:89] found id: ""
	I1001 20:25:00.545149   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.545165   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:00.545173   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:00.545229   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:00.577773   65592 cri.go:89] found id: ""
	I1001 20:25:00.577799   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.577810   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:00.577821   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:00.577835   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:00.628978   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:00.629012   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:00.642192   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:00.642225   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:00.711399   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:00.711432   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:00.711446   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:00.792477   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:00.792514   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:59.118041   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:01.119565   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:02.164636   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:05.236638   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
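	(Editorial note: the interleaved "libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host" lines come from repeated attempts to reach the guest's SSH port. The sketch below is not libmachine's actual code; it only illustrates a TCP reachability check with retry. The address is taken from the log; the timeout and retry cadence are assumptions.)

	// dial_check.go - minimal sketch of an SSH-port reachability probe with retries.
	package main

	import (
		"log"
		"net"
		"time"
	)

	func waitForSSH(addr string, attempts int, timeout, pause time.Duration) bool {
		for i := 0; i < attempts; i++ {
			conn, err := net.DialTimeout("tcp", addr, timeout)
			if err != nil {
				// An unroutable or stopped guest typically surfaces here as
				// "connect: no route to host" or an i/o timeout.
				log.Printf("Error dialing TCP: %v", err)
				time.Sleep(pause)
				continue
			}
			conn.Close()
			return true
		}
		return false
	}

	func main() {
		if waitForSSH("192.168.50.4:22", 10, 5*time.Second, 3*time.Second) {
			log.Println("SSH port is reachable")
		} else {
			log.Println("giving up: SSH port never became reachable")
		}
	}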
	I1001 20:25:02.762323   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:04.764910   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:03.332492   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:03.347542   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:03.347622   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:03.388263   65592 cri.go:89] found id: ""
	I1001 20:25:03.388292   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.388300   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:03.388306   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:03.388353   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:03.421489   65592 cri.go:89] found id: ""
	I1001 20:25:03.421525   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.421534   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:03.421539   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:03.421634   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:03.457139   65592 cri.go:89] found id: ""
	I1001 20:25:03.457172   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.457182   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:03.457189   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:03.457251   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:03.497203   65592 cri.go:89] found id: ""
	I1001 20:25:03.497232   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.497241   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:03.497247   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:03.497313   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:03.535137   65592 cri.go:89] found id: ""
	I1001 20:25:03.535163   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.535171   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:03.535176   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:03.535221   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:03.569131   65592 cri.go:89] found id: ""
	I1001 20:25:03.569158   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.569166   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:03.569171   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:03.569217   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:03.605289   65592 cri.go:89] found id: ""
	I1001 20:25:03.605321   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.605329   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:03.605336   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:03.605389   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:03.651086   65592 cri.go:89] found id: ""
	I1001 20:25:03.651115   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.651123   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:03.651134   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:03.651145   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:03.731256   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:03.731281   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:03.731299   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:03.809393   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:03.809442   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:03.849171   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:03.849198   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:03.898009   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:03.898045   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:06.411962   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:06.425432   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:06.425513   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:06.463339   65592 cri.go:89] found id: ""
	I1001 20:25:06.463371   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.463383   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:06.463391   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:06.463455   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:06.502527   65592 cri.go:89] found id: ""
	I1001 20:25:06.502561   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.502569   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:06.502611   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:06.502687   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:06.547428   65592 cri.go:89] found id: ""
	I1001 20:25:06.547465   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.547474   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:06.547480   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:06.547539   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:06.581672   65592 cri.go:89] found id: ""
	I1001 20:25:06.581699   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.581708   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:06.581713   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:06.581769   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:06.615391   65592 cri.go:89] found id: ""
	I1001 20:25:06.615436   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.615449   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:06.615457   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:06.615525   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:06.651019   65592 cri.go:89] found id: ""
	I1001 20:25:06.651050   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.651060   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:06.651067   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:06.651142   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:06.687887   65592 cri.go:89] found id: ""
	I1001 20:25:06.687912   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.687922   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:06.687929   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:06.687982   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:06.729234   65592 cri.go:89] found id: ""
	I1001 20:25:06.729263   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.729273   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:06.729282   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:06.729296   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:06.747295   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:06.747326   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:06.816480   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:06.816511   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:06.816524   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:06.896918   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:06.896957   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:06.938922   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:06.938958   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:03.619205   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:06.118575   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:06.765214   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:09.261806   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:11.262162   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:09.494252   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:09.508085   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:09.508171   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:09.542999   65592 cri.go:89] found id: ""
	I1001 20:25:09.543029   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.543037   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:09.543043   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:09.543100   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:09.578112   65592 cri.go:89] found id: ""
	I1001 20:25:09.578137   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.578145   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:09.578150   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:09.578199   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:09.613123   65592 cri.go:89] found id: ""
	I1001 20:25:09.613150   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.613158   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:09.613166   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:09.613223   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:09.648172   65592 cri.go:89] found id: ""
	I1001 20:25:09.648214   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.648223   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:09.648230   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:09.648302   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:09.681217   65592 cri.go:89] found id: ""
	I1001 20:25:09.681244   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.681254   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:09.681261   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:09.681320   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:09.718166   65592 cri.go:89] found id: ""
	I1001 20:25:09.718196   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.718204   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:09.718212   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:09.718272   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:09.751910   65592 cri.go:89] found id: ""
	I1001 20:25:09.751942   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.751951   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:09.751956   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:09.752004   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:09.789213   65592 cri.go:89] found id: ""
	I1001 20:25:09.789237   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.789246   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:09.789254   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:09.789265   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:09.826746   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:09.826780   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:09.879079   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:09.879123   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:09.892480   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:09.892507   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:09.967048   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:09.967084   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:09.967103   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:08.118822   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:10.120018   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:12.620582   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:14.356624   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:13.262286   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:15.263349   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:12.545057   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:12.557888   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:12.557969   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:12.594881   65592 cri.go:89] found id: ""
	I1001 20:25:12.594928   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.594942   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:12.594952   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:12.595021   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:12.631393   65592 cri.go:89] found id: ""
	I1001 20:25:12.631425   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.631437   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:12.631445   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:12.631504   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:12.666442   65592 cri.go:89] found id: ""
	I1001 20:25:12.666476   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.666486   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:12.666493   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:12.666548   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:12.703321   65592 cri.go:89] found id: ""
	I1001 20:25:12.703359   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.703371   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:12.703379   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:12.703444   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:12.742188   65592 cri.go:89] found id: ""
	I1001 20:25:12.742216   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.742224   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:12.742230   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:12.742276   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:12.781829   65592 cri.go:89] found id: ""
	I1001 20:25:12.781859   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.781869   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:12.781876   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:12.781940   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:12.815368   65592 cri.go:89] found id: ""
	I1001 20:25:12.815397   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.815405   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:12.815411   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:12.815463   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:12.850913   65592 cri.go:89] found id: ""
	I1001 20:25:12.850941   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.850949   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:12.850958   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:12.850968   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:12.901409   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:12.901443   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:12.914517   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:12.914567   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:12.980086   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:12.980119   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:12.980135   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:13.055950   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:13.055989   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:15.595692   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:15.609648   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:15.609728   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:15.645477   65592 cri.go:89] found id: ""
	I1001 20:25:15.645502   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.645510   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:15.645514   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:15.645558   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:15.679674   65592 cri.go:89] found id: ""
	I1001 20:25:15.679702   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.679711   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:15.679717   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:15.679774   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:15.718057   65592 cri.go:89] found id: ""
	I1001 20:25:15.718082   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.718092   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:15.718097   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:15.718153   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:15.754094   65592 cri.go:89] found id: ""
	I1001 20:25:15.754121   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.754130   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:15.754136   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:15.754189   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:15.790415   65592 cri.go:89] found id: ""
	I1001 20:25:15.790450   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.790464   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:15.790472   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:15.790535   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:15.825603   65592 cri.go:89] found id: ""
	I1001 20:25:15.825630   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.825645   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:15.825653   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:15.825717   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:15.861330   65592 cri.go:89] found id: ""
	I1001 20:25:15.861356   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.861368   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:15.861375   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:15.861451   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:15.897534   65592 cri.go:89] found id: ""
	I1001 20:25:15.897564   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.897575   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:15.897584   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:15.897598   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:15.972842   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:15.972881   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:16.010625   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:16.010653   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:16.062717   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:16.062762   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:16.076538   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:16.076568   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:16.156886   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:15.118878   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:17.119791   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:17.428649   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:17.764089   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:20.261752   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:18.657436   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:18.673018   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:18.673093   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:18.708040   65592 cri.go:89] found id: ""
	I1001 20:25:18.708078   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.708091   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:18.708100   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:18.708167   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:18.740152   65592 cri.go:89] found id: ""
	I1001 20:25:18.740188   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.740200   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:18.740207   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:18.740264   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:18.778238   65592 cri.go:89] found id: ""
	I1001 20:25:18.778270   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.778279   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:18.778287   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:18.778351   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:18.815450   65592 cri.go:89] found id: ""
	I1001 20:25:18.815489   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.815503   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:18.815512   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:18.815576   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:18.850008   65592 cri.go:89] found id: ""
	I1001 20:25:18.850038   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.850047   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:18.850053   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:18.850104   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:18.890919   65592 cri.go:89] found id: ""
	I1001 20:25:18.890943   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.890951   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:18.890957   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:18.891004   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:18.934196   65592 cri.go:89] found id: ""
	I1001 20:25:18.934228   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.934240   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:18.934247   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:18.934307   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:18.977817   65592 cri.go:89] found id: ""
	I1001 20:25:18.977850   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.977862   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:18.977875   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:18.977889   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:19.039867   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:19.039910   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:19.054277   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:19.054310   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:19.125736   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:19.125765   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:19.125782   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:19.208588   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:19.208622   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
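	With no containers found, each cycle falls back to gathering host-level logs: kubelet and CRI-O via journalctl, a filtered dmesg, and a raw container listing. A rough sketch of that gathering step, reusing the exact shell commands visible above (the label-to-command map and structure are assumptions, not minikube's logs.go):

```go
// Sketch of the "Gathering logs for ..." step. The commands are copied from
// the log; the surrounding structure is assumed.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for label, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("==> %s (err=%v)\n%s\n", label, err, out)
	}
}
```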
	I1001 20:25:21.750881   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:21.766638   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:21.766712   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:21.801906   65592 cri.go:89] found id: ""
	I1001 20:25:21.801930   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.801938   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:21.801944   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:21.801990   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:21.842801   65592 cri.go:89] found id: ""
	I1001 20:25:21.842830   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.842844   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:21.842852   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:21.842917   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:21.876550   65592 cri.go:89] found id: ""
	I1001 20:25:21.876577   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.876588   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:21.876594   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:21.876647   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:21.910972   65592 cri.go:89] found id: ""
	I1001 20:25:21.911007   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.911016   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:21.911022   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:21.911098   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:21.945721   65592 cri.go:89] found id: ""
	I1001 20:25:21.945753   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.945765   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:21.945773   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:21.945833   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:21.982101   65592 cri.go:89] found id: ""
	I1001 20:25:21.982131   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.982143   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:21.982151   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:21.982242   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:22.016526   65592 cri.go:89] found id: ""
	I1001 20:25:22.016558   65592 logs.go:276] 0 containers: []
	W1001 20:25:22.016569   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:22.016577   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:22.016632   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:22.054792   65592 cri.go:89] found id: ""
	I1001 20:25:22.054822   65592 logs.go:276] 0 containers: []
	W1001 20:25:22.054833   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:22.054844   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:22.054863   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:22.105936   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:22.105974   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:22.120834   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:22.120858   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:22.195177   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:22.195211   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:22.195228   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:19.120304   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:21.618511   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:23.512698   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:22.264134   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:24.762355   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:22.281244   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:22.281285   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:24.824197   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:24.840967   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:24.841030   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:24.882399   65592 cri.go:89] found id: ""
	I1001 20:25:24.882429   65592 logs.go:276] 0 containers: []
	W1001 20:25:24.882443   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:24.882449   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:24.882497   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:24.935548   65592 cri.go:89] found id: ""
	I1001 20:25:24.935581   65592 logs.go:276] 0 containers: []
	W1001 20:25:24.935590   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:24.935596   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:24.935644   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:24.976931   65592 cri.go:89] found id: ""
	I1001 20:25:24.976958   65592 logs.go:276] 0 containers: []
	W1001 20:25:24.976969   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:24.976976   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:24.977035   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:25.009926   65592 cri.go:89] found id: ""
	I1001 20:25:25.009959   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.009968   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:25.009975   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:25.010039   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:25.043261   65592 cri.go:89] found id: ""
	I1001 20:25:25.043299   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.043310   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:25.043316   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:25.043377   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:25.075177   65592 cri.go:89] found id: ""
	I1001 20:25:25.075205   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.075214   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:25.075221   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:25.075267   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:25.109792   65592 cri.go:89] found id: ""
	I1001 20:25:25.109832   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.109845   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:25.109871   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:25.109942   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:25.148721   65592 cri.go:89] found id: ""
	I1001 20:25:25.148753   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.148763   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:25.148772   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:25.148790   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:25.161802   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:25.161841   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:25.227699   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:25.227732   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:25.227750   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:25.314028   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:25.314075   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:25.354881   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:25.354919   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:23.618792   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:26.118493   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:26.580628   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:27.262584   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:29.761866   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
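	The interleaved pod_ready lines (pids 64676 and 65263) are a different cluster's loop polling whether its metrics-server pod reports a True Ready condition; the 4m0s timeout that eventually fires is visible further down in the log. A hedged sketch of such a poll using kubectl's JSONPath output (the helper name, interval, and pod name handling are mine, not minikube's pod_ready.go):

```go
// Sketch: poll a pod's Ready condition until a deadline, as the repeated
// `has status "Ready":"False"` lines above do. Names and timings assumed.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podReady(ns, pod string) (bool, error) {
	jsonpath := `{.status.conditions[?(@.type=="Ready")].status}`
	out, err := exec.Command("kubectl", "-n", ns, "get", "pod", pod,
		"-o", "jsonpath="+jsonpath).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // the log waits 4m0s before giving up
	for time.Now().Before(deadline) {
		ok, err := podReady("kube-system", "metrics-server-6867b74b74-2rpwt")
		if err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
```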
	I1001 20:25:27.906936   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:27.920745   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:27.920806   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:27.955399   65592 cri.go:89] found id: ""
	I1001 20:25:27.955426   65592 logs.go:276] 0 containers: []
	W1001 20:25:27.955444   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:27.955450   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:27.955503   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:27.993714   65592 cri.go:89] found id: ""
	I1001 20:25:27.993747   65592 logs.go:276] 0 containers: []
	W1001 20:25:27.993759   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:27.993766   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:27.993827   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:28.028439   65592 cri.go:89] found id: ""
	I1001 20:25:28.028475   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.028487   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:28.028494   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:28.028563   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:28.072935   65592 cri.go:89] found id: ""
	I1001 20:25:28.072966   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.072977   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:28.072985   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:28.073050   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:28.107241   65592 cri.go:89] found id: ""
	I1001 20:25:28.107275   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.107285   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:28.107293   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:28.107357   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:28.141382   65592 cri.go:89] found id: ""
	I1001 20:25:28.141412   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.141423   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:28.141431   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:28.141494   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:28.175749   65592 cri.go:89] found id: ""
	I1001 20:25:28.175782   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.175794   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:28.175801   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:28.175864   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:28.214968   65592 cri.go:89] found id: ""
	I1001 20:25:28.214997   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.215006   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:28.215015   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:28.215027   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:28.259588   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:28.259619   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:28.314439   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:28.314480   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:28.327938   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:28.327967   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:28.399479   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:28.399508   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:28.399523   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:30.978863   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:30.991415   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:30.991493   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:31.026443   65592 cri.go:89] found id: ""
	I1001 20:25:31.026480   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.026494   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:31.026513   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:31.026576   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:31.060635   65592 cri.go:89] found id: ""
	I1001 20:25:31.060663   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.060678   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:31.060684   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:31.060743   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:31.095494   65592 cri.go:89] found id: ""
	I1001 20:25:31.095525   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.095533   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:31.095540   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:31.095587   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:31.130693   65592 cri.go:89] found id: ""
	I1001 20:25:31.130718   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.130728   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:31.130741   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:31.130802   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:31.167928   65592 cri.go:89] found id: ""
	I1001 20:25:31.167960   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.167973   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:31.167980   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:31.168033   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:31.202813   65592 cri.go:89] found id: ""
	I1001 20:25:31.202843   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.202855   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:31.202864   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:31.202925   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:31.240424   65592 cri.go:89] found id: ""
	I1001 20:25:31.240459   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.240468   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:31.240474   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:31.240521   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:31.275470   65592 cri.go:89] found id: ""
	I1001 20:25:31.275502   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.275510   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:31.275518   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:31.275529   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:31.329604   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:31.329642   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:31.342695   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:31.342724   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:31.410169   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:31.410275   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:31.410303   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:31.489630   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:31.489677   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:28.118608   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:30.118718   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:32.119227   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:32.660640   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:35.732653   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
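	The pid 68418 lines are yet another machine in this run: libmachine repeatedly trying to open a TCP connection to the guest's SSH port at 192.168.50.4:22 and getting "no route to host", i.e. the VM is unreachable at the network layer rather than rejecting SSH credentials. A tiny sketch of that reachability probe (the timeout value is assumed):

```go
// Sketch of the check libmachine is failing at: a plain TCP dial to the
// guest's SSH port. "connect: no route to host" means the address is not
// reachable at all, not that sshd refused us.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.50.4:22", 5*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err) // e.g. connect: no route to host
		return
	}
	conn.Close()
	fmt.Println("SSH port reachable")
}
```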
	I1001 20:25:31.762062   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:33.764597   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:36.263251   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:34.027406   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:34.039902   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:34.039975   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:34.074992   65592 cri.go:89] found id: ""
	I1001 20:25:34.075025   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.075038   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:34.075045   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:34.075106   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:34.110264   65592 cri.go:89] found id: ""
	I1001 20:25:34.110293   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.110304   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:34.110311   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:34.110371   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:34.147097   65592 cri.go:89] found id: ""
	I1001 20:25:34.147132   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.147143   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:34.147151   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:34.147208   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:34.179453   65592 cri.go:89] found id: ""
	I1001 20:25:34.179481   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.179491   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:34.179500   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:34.179554   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:34.212407   65592 cri.go:89] found id: ""
	I1001 20:25:34.212433   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.212442   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:34.212449   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:34.212495   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:34.244400   65592 cri.go:89] found id: ""
	I1001 20:25:34.244429   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.244440   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:34.244447   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:34.244510   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:34.278423   65592 cri.go:89] found id: ""
	I1001 20:25:34.278448   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.278458   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:34.278464   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:34.278520   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:34.311019   65592 cri.go:89] found id: ""
	I1001 20:25:34.311049   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.311059   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:34.311072   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:34.311083   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:34.347521   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:34.347549   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:34.400717   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:34.400754   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:34.414550   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:34.414576   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:34.486478   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:34.486503   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:34.486519   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:37.071687   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:37.084941   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:37.085025   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:37.119834   65592 cri.go:89] found id: ""
	I1001 20:25:37.119862   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.119870   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:37.119875   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:37.119984   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:37.154795   65592 cri.go:89] found id: ""
	I1001 20:25:37.154832   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.154851   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:37.154867   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:37.154927   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:37.191552   65592 cri.go:89] found id: ""
	I1001 20:25:37.191581   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.191592   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:37.191599   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:37.191670   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:34.119370   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:36.119698   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:38.761540   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:40.762894   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:37.228883   65592 cri.go:89] found id: ""
	I1001 20:25:37.228918   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.228928   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:37.228936   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:37.229000   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:37.263533   65592 cri.go:89] found id: ""
	I1001 20:25:37.263558   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.263568   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:37.263577   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:37.263638   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:37.297367   65592 cri.go:89] found id: ""
	I1001 20:25:37.297401   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.297414   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:37.297422   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:37.297486   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:37.331091   65592 cri.go:89] found id: ""
	I1001 20:25:37.331121   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.331129   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:37.331135   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:37.331202   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:37.364861   65592 cri.go:89] found id: ""
	I1001 20:25:37.364889   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.364897   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:37.364905   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:37.364916   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:37.417507   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:37.417545   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:37.431613   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:37.431646   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:37.497821   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:37.497846   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:37.497861   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:37.578951   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:37.578996   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:40.121350   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:40.134553   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:40.134634   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:40.169277   65592 cri.go:89] found id: ""
	I1001 20:25:40.169313   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.169325   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:40.169333   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:40.169399   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:40.204111   65592 cri.go:89] found id: ""
	I1001 20:25:40.204144   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.204153   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:40.204159   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:40.204206   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:40.237841   65592 cri.go:89] found id: ""
	I1001 20:25:40.237872   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.237880   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:40.237886   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:40.237942   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:40.273081   65592 cri.go:89] found id: ""
	I1001 20:25:40.273108   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.273117   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:40.273123   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:40.273186   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:40.307351   65592 cri.go:89] found id: ""
	I1001 20:25:40.307384   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.307394   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:40.307399   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:40.307462   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:40.340543   65592 cri.go:89] found id: ""
	I1001 20:25:40.340569   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.340578   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:40.340584   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:40.340655   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:40.376070   65592 cri.go:89] found id: ""
	I1001 20:25:40.376112   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.376123   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:40.376130   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:40.376194   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:40.410236   65592 cri.go:89] found id: ""
	I1001 20:25:40.410267   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.410279   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:40.410289   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:40.410300   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:40.463799   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:40.463835   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:40.478403   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:40.478436   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:40.547250   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:40.547279   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:40.547291   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:40.630061   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:40.630098   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:38.617891   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:40.618430   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:41.612771   65263 pod_ready.go:82] duration metric: took 4m0.000338317s for pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace to be "Ready" ...
	E1001 20:25:41.612803   65263 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace to be "Ready" (will not retry!)
	I1001 20:25:41.612832   65263 pod_ready.go:39] duration metric: took 4m13.169141642s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:25:41.612859   65263 kubeadm.go:597] duration metric: took 4m21.203039001s to restartPrimaryControlPlane
	W1001 20:25:41.612919   65263 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1001 20:25:41.612944   65263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1001 20:25:41.812689   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:44.884661   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:43.264334   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:45.762034   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:43.170764   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:43.183046   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:43.183124   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:43.222995   65592 cri.go:89] found id: ""
	I1001 20:25:43.223029   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.223038   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:43.223044   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:43.223105   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:43.256861   65592 cri.go:89] found id: ""
	I1001 20:25:43.256891   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.256902   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:43.256910   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:43.257002   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:43.292643   65592 cri.go:89] found id: ""
	I1001 20:25:43.292687   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.292698   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:43.292704   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:43.292754   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:43.326539   65592 cri.go:89] found id: ""
	I1001 20:25:43.326568   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.326576   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:43.326582   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:43.326628   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:43.359787   65592 cri.go:89] found id: ""
	I1001 20:25:43.359813   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.359822   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:43.359828   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:43.359890   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:43.392045   65592 cri.go:89] found id: ""
	I1001 20:25:43.392076   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.392086   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:43.392092   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:43.392145   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:43.429498   65592 cri.go:89] found id: ""
	I1001 20:25:43.429529   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.429538   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:43.429544   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:43.429591   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:43.462728   65592 cri.go:89] found id: ""
	I1001 20:25:43.462760   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.462771   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:43.462781   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:43.462798   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:43.512683   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:43.512717   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:43.527253   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:43.527285   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:43.598963   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:43.598989   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:43.599003   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:43.679743   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:43.679790   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:46.217101   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:46.230349   65592 kubeadm.go:597] duration metric: took 4m1.895228035s to restartPrimaryControlPlane
	W1001 20:25:46.230421   65592 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1001 20:25:46.230450   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1001 20:25:47.762241   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:49.763115   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:47.271291   65592 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.040818559s)
	I1001 20:25:47.271362   65592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:25:47.285083   65592 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 20:25:47.295774   65592 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:25:47.305487   65592 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:25:47.305511   65592 kubeadm.go:157] found existing configuration files:
	
	I1001 20:25:47.305568   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 20:25:47.314488   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:25:47.314573   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:25:47.323852   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 20:25:47.332496   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:25:47.332553   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:25:47.341236   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 20:25:47.349932   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:25:47.350002   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:25:47.359345   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 20:25:47.369180   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:25:47.369233   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
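	The sequence above is the stale-config cleanup that precedes a fresh `kubeadm init`: each /etc/kubernetes/*.conf is grepped for the expected https://control-plane.minikube.internal:8443 endpoint and removed when the check fails (here the files are simply absent, so every grep exits with status 2). A compact sketch of that loop, using the same commands as the log (the loop structure is assumed, not minikube's kubeadm.go):

```go
// Sketch: drop kubeconfig-style files that do not point at the expected
// control-plane endpoint, so kubeadm init can regenerate them.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the pattern or the file is missing.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%s: stale or missing, removing\n", f)
			exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}
```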
	I1001 20:25:47.378232   65592 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 20:25:47.595501   65592 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
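This Service-Kubelet preflight warning is tolerated here because minikube starts the kubelet itself later in the flow; the remedy the warning refers to, were it needed on a plain host, is simply:

    sudo systemctl enable kubelet.service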
	I1001 20:25:50.964640   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:54.036635   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:52.261890   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:54.761886   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:00.116640   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:57.261837   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:59.262445   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:01.262529   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:03.188675   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:03.762361   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:06.261749   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:07.708438   65263 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.095470945s)
	I1001 20:26:07.708514   65263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:26:07.722982   65263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 20:26:07.732118   65263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:26:07.741172   65263 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:26:07.741198   65263 kubeadm.go:157] found existing configuration files:
	
	I1001 20:26:07.741244   65263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 20:26:07.749683   65263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:26:07.749744   65263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:26:07.758875   65263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 20:26:07.767668   65263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:26:07.767739   65263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:26:07.776648   65263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 20:26:07.785930   65263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:26:07.785982   65263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:26:07.794739   65263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 20:26:07.803180   65263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:26:07.803241   65263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 20:26:07.812178   65263 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 20:26:07.851817   65263 kubeadm.go:310] W1001 20:26:07.836874    2546 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 20:26:07.852402   65263 kubeadm.go:310] W1001 20:26:07.837670    2546 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
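Both deprecation warnings point at the same remediation, quoted verbatim by kubeadm; a sketch of that migration against the config file used in this run (the output path is illustrative):

    sudo kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm-migrated.yaml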
	I1001 20:26:09.272541   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:08.761247   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:10.761797   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:07.957551   65263 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 20:26:12.344653   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:16.385918   65263 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 20:26:16.385979   65263 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:26:16.386062   65263 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:26:16.386172   65263 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:26:16.386297   65263 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 20:26:16.386400   65263 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 20:26:16.387827   65263 out.go:235]   - Generating certificates and keys ...
	I1001 20:26:16.387909   65263 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:26:16.387989   65263 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:26:16.388104   65263 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 20:26:16.388191   65263 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 20:26:16.388284   65263 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 20:26:16.388370   65263 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 20:26:16.388464   65263 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 20:26:16.388545   65263 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 20:26:16.388646   65263 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 20:26:16.388775   65263 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 20:26:16.388824   65263 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 20:26:16.388908   65263 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:26:16.388956   65263 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:26:16.389006   65263 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 20:26:16.389048   65263 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:26:16.389117   65263 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:26:16.389201   65263 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:26:16.389333   65263 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:26:16.389444   65263 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:26:16.390823   65263 out.go:235]   - Booting up control plane ...
	I1001 20:26:16.390917   65263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:26:16.390992   65263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:26:16.391061   65263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:26:16.391161   65263 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:26:16.391285   65263 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:26:16.391335   65263 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:26:16.391468   65263 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 20:26:16.391572   65263 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 20:26:16.391628   65263 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.349149ms
	I1001 20:26:16.391686   65263 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1001 20:26:16.391736   65263 kubeadm.go:310] [api-check] The API server is healthy after 5.002046172s
	I1001 20:26:16.391818   65263 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 20:26:16.391923   65263 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 20:26:16.391999   65263 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 20:26:16.392169   65263 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-106982 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 20:26:16.392225   65263 kubeadm.go:310] [bootstrap-token] Using token: xlxn2k.owwnzt3amr4nx0st
	I1001 20:26:16.393437   65263 out.go:235]   - Configuring RBAC rules ...
	I1001 20:26:16.393539   65263 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 20:26:16.393609   65263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 20:26:16.393722   65263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 20:26:16.393834   65263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 20:26:16.393940   65263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 20:26:16.394017   65263 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 20:26:16.394117   65263 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 20:26:16.394154   65263 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 20:26:16.394195   65263 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 20:26:16.394200   65263 kubeadm.go:310] 
	I1001 20:26:16.394259   65263 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 20:26:16.394269   65263 kubeadm.go:310] 
	I1001 20:26:16.394335   65263 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 20:26:16.394341   65263 kubeadm.go:310] 
	I1001 20:26:16.394363   65263 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 20:26:16.394440   65263 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 20:26:16.394496   65263 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 20:26:16.394502   65263 kubeadm.go:310] 
	I1001 20:26:16.394553   65263 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 20:26:16.394559   65263 kubeadm.go:310] 
	I1001 20:26:16.394601   65263 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 20:26:16.394611   65263 kubeadm.go:310] 
	I1001 20:26:16.394656   65263 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 20:26:16.394720   65263 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 20:26:16.394804   65263 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 20:26:16.394814   65263 kubeadm.go:310] 
	I1001 20:26:16.394901   65263 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 20:26:16.394996   65263 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 20:26:16.395010   65263 kubeadm.go:310] 
	I1001 20:26:16.395128   65263 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xlxn2k.owwnzt3amr4nx0st \
	I1001 20:26:16.395262   65263 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 \
	I1001 20:26:16.395299   65263 kubeadm.go:310] 	--control-plane 
	I1001 20:26:16.395308   65263 kubeadm.go:310] 
	I1001 20:26:16.395426   65263 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 20:26:16.395436   65263 kubeadm.go:310] 
	I1001 20:26:16.395548   65263 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xlxn2k.owwnzt3amr4nx0st \
	I1001 20:26:16.395648   65263 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 
	I1001 20:26:16.395658   65263 cni.go:84] Creating CNI manager for ""
	I1001 20:26:16.395665   65263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:26:16.396852   65263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 20:26:12.763435   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:15.262381   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:16.398081   65263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 20:26:16.407920   65263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
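The 496-byte conflist copied here is the bridge CNI configuration minikube pairs with the crio runtime; it can be inspected on the node afterwards, for example (profile name taken from this run):

    minikube ssh -p embed-certs-106982 "sudo cat /etc/cni/net.d/1-k8s.conflist"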
	I1001 20:26:16.428213   65263 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 20:26:16.428312   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:16.428344   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-106982 minikube.k8s.io/updated_at=2024_10_01T20_26_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=embed-certs-106982 minikube.k8s.io/primary=true
	I1001 20:26:16.667876   65263 ops.go:34] apiserver oom_adj: -16
	I1001 20:26:16.667891   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:17.168194   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:17.668772   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:18.168815   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:18.668087   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:19.168767   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:19.668624   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:20.167974   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:20.668002   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:20.758486   65263 kubeadm.go:1113] duration metric: took 4.330238814s to wait for elevateKubeSystemPrivileges
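The polling loop above (the repeated kubectl get sa default calls) is how minikube waits for the controller manager to create the default ServiceAccount after the minikube-rbac ClusterRoleBinding is applied; the equivalent check from a workstation would be roughly:

    kubectl --context embed-certs-106982 -n default get serviceaccount default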
	I1001 20:26:20.758520   65263 kubeadm.go:394] duration metric: took 5m0.403602376s to StartCluster
	I1001 20:26:20.758539   65263 settings.go:142] acquiring lock: {Name:mkeb3fe64ed992373f048aef24eb0d675bcab60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:26:20.758613   65263 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:26:20.760430   65263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/kubeconfig: {Name:mk7ef19d546928001abf2478a3abe1d17765a591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:26:20.760678   65263 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 20:26:20.760746   65263 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 20:26:20.760852   65263 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-106982"
	I1001 20:26:20.760881   65263 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-106982"
	I1001 20:26:20.760877   65263 addons.go:69] Setting default-storageclass=true in profile "embed-certs-106982"
	W1001 20:26:20.760893   65263 addons.go:243] addon storage-provisioner should already be in state true
	I1001 20:26:20.760891   65263 addons.go:69] Setting metrics-server=true in profile "embed-certs-106982"
	I1001 20:26:20.760926   65263 host.go:66] Checking if "embed-certs-106982" exists ...
	I1001 20:26:20.760926   65263 addons.go:234] Setting addon metrics-server=true in "embed-certs-106982"
	W1001 20:26:20.761009   65263 addons.go:243] addon metrics-server should already be in state true
	I1001 20:26:20.761041   65263 host.go:66] Checking if "embed-certs-106982" exists ...
	I1001 20:26:20.760906   65263 config.go:182] Loaded profile config "embed-certs-106982": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:26:20.760902   65263 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-106982"
	I1001 20:26:20.761374   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.761426   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.761429   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.761468   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.761545   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.761591   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.762861   65263 out.go:177] * Verifying Kubernetes components...
	I1001 20:26:20.764393   65263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:26:20.778448   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38619
	I1001 20:26:20.779031   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.779198   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39861
	I1001 20:26:20.779632   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.779657   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.779822   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.780085   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.780331   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.780352   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.780789   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.780829   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.781030   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.781240   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetState
	I1001 20:26:20.781260   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37397
	I1001 20:26:20.781672   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.782168   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.782189   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.782587   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.783037   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.783073   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.784573   65263 addons.go:234] Setting addon default-storageclass=true in "embed-certs-106982"
	W1001 20:26:20.784589   65263 addons.go:243] addon default-storageclass should already be in state true
	I1001 20:26:20.784609   65263 host.go:66] Checking if "embed-certs-106982" exists ...
	I1001 20:26:20.784877   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.784912   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.797787   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37113
	I1001 20:26:20.797864   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33201
	I1001 20:26:20.798261   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.798311   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.798836   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.798855   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.798931   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.798951   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.799226   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35413
	I1001 20:26:20.799230   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.799367   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.799409   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetState
	I1001 20:26:20.799515   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetState
	I1001 20:26:20.799695   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.800114   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.800130   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.800602   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.801316   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.801331   65263 main.go:141] libmachine: (embed-certs-106982) Calling .DriverName
	I1001 20:26:20.801351   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.801391   65263 main.go:141] libmachine: (embed-certs-106982) Calling .DriverName
	I1001 20:26:20.803237   65263 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1001 20:26:20.803241   65263 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 20:26:18.420597   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:17.762869   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:20.262479   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:20.804378   65263 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1001 20:26:20.804394   65263 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1001 20:26:20.804411   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHHostname
	I1001 20:26:20.804571   65263 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:26:20.804586   65263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 20:26:20.804603   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHHostname
	I1001 20:26:20.808458   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.808866   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.808906   65263 main.go:141] libmachine: (embed-certs-106982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:ab:67", ip: ""} in network mk-embed-certs-106982: {Iface:virbr1 ExpiryTime:2024-10-01 21:21:05 +0000 UTC Type:0 Mac:52:54:00:7f:ab:67 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:embed-certs-106982 Clientid:01:52:54:00:7f:ab:67}
	I1001 20:26:20.808923   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined IP address 192.168.39.203 and MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.809183   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHPort
	I1001 20:26:20.809326   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHKeyPath
	I1001 20:26:20.809462   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHUsername
	I1001 20:26:20.809582   65263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/embed-certs-106982/id_rsa Username:docker}
	I1001 20:26:20.809917   65263 main.go:141] libmachine: (embed-certs-106982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:ab:67", ip: ""} in network mk-embed-certs-106982: {Iface:virbr1 ExpiryTime:2024-10-01 21:21:05 +0000 UTC Type:0 Mac:52:54:00:7f:ab:67 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:embed-certs-106982 Clientid:01:52:54:00:7f:ab:67}
	I1001 20:26:20.809941   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined IP address 192.168.39.203 and MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.809975   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHPort
	I1001 20:26:20.810172   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHKeyPath
	I1001 20:26:20.810320   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHUsername
	I1001 20:26:20.810498   65263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/embed-certs-106982/id_rsa Username:docker}
	I1001 20:26:20.818676   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33699
	I1001 20:26:20.819066   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.819574   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.819596   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.819900   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.820110   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetState
	I1001 20:26:20.821633   65263 main.go:141] libmachine: (embed-certs-106982) Calling .DriverName
	I1001 20:26:20.821820   65263 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 20:26:20.821834   65263 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 20:26:20.821852   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHHostname
	I1001 20:26:20.824684   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.825165   65263 main.go:141] libmachine: (embed-certs-106982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:ab:67", ip: ""} in network mk-embed-certs-106982: {Iface:virbr1 ExpiryTime:2024-10-01 21:21:05 +0000 UTC Type:0 Mac:52:54:00:7f:ab:67 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:embed-certs-106982 Clientid:01:52:54:00:7f:ab:67}
	I1001 20:26:20.825205   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined IP address 192.168.39.203 and MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.825425   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHPort
	I1001 20:26:20.825577   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHKeyPath
	I1001 20:26:20.825697   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHUsername
	I1001 20:26:20.825835   65263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/embed-certs-106982/id_rsa Username:docker}
	I1001 20:26:20.984756   65263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:26:21.014051   65263 node_ready.go:35] waiting up to 6m0s for node "embed-certs-106982" to be "Ready" ...
	I1001 20:26:21.023227   65263 node_ready.go:49] node "embed-certs-106982" has status "Ready":"True"
	I1001 20:26:21.023274   65263 node_ready.go:38] duration metric: took 9.170523ms for node "embed-certs-106982" to be "Ready" ...
	I1001 20:26:21.023286   65263 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:26:21.029371   65263 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rq5ms" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:21.113480   65263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1001 20:26:21.113509   65263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1001 20:26:21.138000   65263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1001 20:26:21.138028   65263 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1001 20:26:21.162057   65263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:26:21.240772   65263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 20:26:21.251310   65263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 20:26:21.251337   65263 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1001 20:26:21.316994   65263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 20:26:22.282775   65263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.041963655s)
	I1001 20:26:22.282809   65263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.120713974s)
	I1001 20:26:22.282835   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.282849   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.282849   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.282864   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.283226   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.283243   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.283256   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.283265   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.283244   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.283298   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.283311   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.283275   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.283278   65263 main.go:141] libmachine: (embed-certs-106982) DBG | Closing plugin on server side
	I1001 20:26:22.283808   65263 main.go:141] libmachine: (embed-certs-106982) DBG | Closing plugin on server side
	I1001 20:26:22.283808   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.283839   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.283892   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.283907   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.342382   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.342407   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.342708   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.342732   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.434882   65263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.117844425s)
	I1001 20:26:22.434937   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.434950   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.435276   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.435291   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.435301   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.435309   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.435554   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.435582   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.435593   65263 addons.go:475] Verifying addon metrics-server=true in "embed-certs-106982"
	I1001 20:26:22.437796   65263 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1001 20:26:22.438856   65263 addons.go:510] duration metric: took 1.678119807s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
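With the three addons applied, their state can be checked directly against the new cluster; two illustrative follow-ups (kubectl top only returns data once the metrics-server pod is Ready, which this run verifies separately later):

    kubectl --context embed-certs-106982 -n kube-system rollout status deployment/metrics-server
    kubectl --context embed-certs-106982 top nodes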
	I1001 20:26:21.492616   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:22.263077   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:24.761931   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:23.036676   65263 pod_ready.go:103] pod "coredns-7c65d6cfc9-rq5ms" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:25.537836   65263 pod_ready.go:103] pod "coredns-7c65d6cfc9-rq5ms" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:26.536827   65263 pod_ready.go:93] pod "coredns-7c65d6cfc9-rq5ms" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:26.536853   65263 pod_ready.go:82] duration metric: took 5.507455172s for pod "coredns-7c65d6cfc9-rq5ms" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:26.536865   65263 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wfdwp" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:26.541397   65263 pod_ready.go:93] pod "coredns-7c65d6cfc9-wfdwp" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:26.541427   65263 pod_ready.go:82] duration metric: took 4.554335ms for pod "coredns-7c65d6cfc9-wfdwp" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:26.541436   65263 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.048586   65263 pod_ready.go:93] pod "etcd-embed-certs-106982" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:27.048612   65263 pod_ready.go:82] duration metric: took 507.170207ms for pod "etcd-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.048622   65263 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.053967   65263 pod_ready.go:93] pod "kube-apiserver-embed-certs-106982" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:27.053994   65263 pod_ready.go:82] duration metric: took 5.365871ms for pod "kube-apiserver-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.054007   65263 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.059419   65263 pod_ready.go:93] pod "kube-controller-manager-embed-certs-106982" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:27.059441   65263 pod_ready.go:82] duration metric: took 5.427863ms for pod "kube-controller-manager-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.059452   65263 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fjnvc" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.333488   65263 pod_ready.go:93] pod "kube-proxy-fjnvc" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:27.333512   65263 pod_ready.go:82] duration metric: took 274.054021ms for pod "kube-proxy-fjnvc" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.333521   65263 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.733368   65263 pod_ready.go:93] pod "kube-scheduler-embed-certs-106982" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:27.733392   65263 pod_ready.go:82] duration metric: took 399.861423ms for pod "kube-scheduler-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.733400   65263 pod_ready.go:39] duration metric: took 6.710101442s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:26:27.733422   65263 api_server.go:52] waiting for apiserver process to appear ...
	I1001 20:26:27.733476   65263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:26:27.750336   65263 api_server.go:72] duration metric: took 6.989620923s to wait for apiserver process to appear ...
	I1001 20:26:27.750367   65263 api_server.go:88] waiting for apiserver healthz status ...
	I1001 20:26:27.750389   65263 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I1001 20:26:27.755350   65263 api_server.go:279] https://192.168.39.203:8443/healthz returned 200:
	ok
	I1001 20:26:27.756547   65263 api_server.go:141] control plane version: v1.31.1
	I1001 20:26:27.756572   65263 api_server.go:131] duration metric: took 6.196295ms to wait for apiserver health ...
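The healthz probe performed here can be reproduced by hand; assuming the API server's default anonymous access to /healthz (the system:public-info-viewer binding) is in place, this prints ok:

    curl -k https://192.168.39.203:8443/healthz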
	I1001 20:26:27.756583   65263 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 20:26:27.937329   65263 system_pods.go:59] 9 kube-system pods found
	I1001 20:26:27.937364   65263 system_pods.go:61] "coredns-7c65d6cfc9-rq5ms" [652fcc3d-ae12-4e11-b212-8891c1c05701] Running
	I1001 20:26:27.937373   65263 system_pods.go:61] "coredns-7c65d6cfc9-wfdwp" [1174cd48-6855-4813-9ecd-3b3a82386720] Running
	I1001 20:26:27.937380   65263 system_pods.go:61] "etcd-embed-certs-106982" [84d678ad-7322-48d0-8bab-6c683d3cf8a5] Running
	I1001 20:26:27.937386   65263 system_pods.go:61] "kube-apiserver-embed-certs-106982" [93d7fba8-306f-4b04-b65b-e3d4442f9ba6] Running
	I1001 20:26:27.937392   65263 system_pods.go:61] "kube-controller-manager-embed-certs-106982" [5e405af0-a942-4040-a955-8a007c2fc6e9] Running
	I1001 20:26:27.937396   65263 system_pods.go:61] "kube-proxy-fjnvc" [728b1b90-5961-45e9-9818-8fc6f6db1634] Running
	I1001 20:26:27.937402   65263 system_pods.go:61] "kube-scheduler-embed-certs-106982" [c0289891-9235-44de-a3cb-669648f5c18e] Running
	I1001 20:26:27.937416   65263 system_pods.go:61] "metrics-server-6867b74b74-z27sl" [dd1b4cdc-5ffb-4214-96c9-569ef4f7ba09] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:26:27.937427   65263 system_pods.go:61] "storage-provisioner" [3aaab1f2-8361-46c6-88be-ed9004628715] Running
	I1001 20:26:27.937441   65263 system_pods.go:74] duration metric: took 180.849735ms to wait for pod list to return data ...
	I1001 20:26:27.937453   65263 default_sa.go:34] waiting for default service account to be created ...
	I1001 20:26:28.133918   65263 default_sa.go:45] found service account: "default"
	I1001 20:26:28.133945   65263 default_sa.go:55] duration metric: took 196.482206ms for default service account to be created ...
	I1001 20:26:28.133955   65263 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 20:26:28.335883   65263 system_pods.go:86] 9 kube-system pods found
	I1001 20:26:28.335916   65263 system_pods.go:89] "coredns-7c65d6cfc9-rq5ms" [652fcc3d-ae12-4e11-b212-8891c1c05701] Running
	I1001 20:26:28.335923   65263 system_pods.go:89] "coredns-7c65d6cfc9-wfdwp" [1174cd48-6855-4813-9ecd-3b3a82386720] Running
	I1001 20:26:28.335927   65263 system_pods.go:89] "etcd-embed-certs-106982" [84d678ad-7322-48d0-8bab-6c683d3cf8a5] Running
	I1001 20:26:28.335931   65263 system_pods.go:89] "kube-apiserver-embed-certs-106982" [93d7fba8-306f-4b04-b65b-e3d4442f9ba6] Running
	I1001 20:26:28.335935   65263 system_pods.go:89] "kube-controller-manager-embed-certs-106982" [5e405af0-a942-4040-a955-8a007c2fc6e9] Running
	I1001 20:26:28.335939   65263 system_pods.go:89] "kube-proxy-fjnvc" [728b1b90-5961-45e9-9818-8fc6f6db1634] Running
	I1001 20:26:28.335942   65263 system_pods.go:89] "kube-scheduler-embed-certs-106982" [c0289891-9235-44de-a3cb-669648f5c18e] Running
	I1001 20:26:28.335947   65263 system_pods.go:89] "metrics-server-6867b74b74-z27sl" [dd1b4cdc-5ffb-4214-96c9-569ef4f7ba09] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:26:28.335951   65263 system_pods.go:89] "storage-provisioner" [3aaab1f2-8361-46c6-88be-ed9004628715] Running
	I1001 20:26:28.335959   65263 system_pods.go:126] duration metric: took 202.000148ms to wait for k8s-apps to be running ...
	I1001 20:26:28.335967   65263 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 20:26:28.336013   65263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:26:28.350578   65263 system_svc.go:56] duration metric: took 14.603568ms WaitForService to wait for kubelet
	I1001 20:26:28.350608   65263 kubeadm.go:582] duration metric: took 7.589898283s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:26:28.350630   65263 node_conditions.go:102] verifying NodePressure condition ...
	I1001 20:26:28.533508   65263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 20:26:28.533533   65263 node_conditions.go:123] node cpu capacity is 2
	I1001 20:26:28.533544   65263 node_conditions.go:105] duration metric: took 182.908473ms to run NodePressure ...
	I1001 20:26:28.533554   65263 start.go:241] waiting for startup goroutines ...
	I1001 20:26:28.533561   65263 start.go:246] waiting for cluster config update ...
	I1001 20:26:28.533571   65263 start.go:255] writing updated cluster config ...
	I1001 20:26:28.533862   65263 ssh_runner.go:195] Run: rm -f paused
	I1001 20:26:28.580991   65263 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 20:26:28.583612   65263 out.go:177] * Done! kubectl is now configured to use "embed-certs-106982" cluster and "default" namespace by default
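At this point the embed-certs-106982 context is the active one in the updated kubeconfig, so ordinary kubectl commands reach the new cluster, for example:

    kubectl --context embed-certs-106982 get nodes -o wide
    kubectl --context embed-certs-106982 get pods -A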
	I1001 20:26:27.572585   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:30.648588   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:27.262297   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:29.761795   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:31.762340   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:34.261713   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:35.263742   64676 pod_ready.go:82] duration metric: took 4m0.008218565s for pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace to be "Ready" ...
	E1001 20:26:35.263766   64676 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1001 20:26:35.263774   64676 pod_ready.go:39] duration metric: took 4m6.044360969s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
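The metrics-server pod tracked by this process never reached Ready inside the 4m budget, so the wait ends with a context-deadline error and the run falls back to gathering logs below; the usual manual follow-up against the profile that owns this pod would be along these lines:

    kubectl -n kube-system describe pod metrics-server-6867b74b74-2rpwt
    kubectl -n kube-system logs deployment/metrics-server --all-containers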
	I1001 20:26:35.263791   64676 api_server.go:52] waiting for apiserver process to appear ...
	I1001 20:26:35.263820   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:26:35.263879   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:26:35.314427   64676 cri.go:89] found id: "a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:35.314450   64676 cri.go:89] found id: ""
	I1001 20:26:35.314457   64676 logs.go:276] 1 containers: [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd]
	I1001 20:26:35.314510   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.319554   64676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:26:35.319627   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:26:35.352986   64676 cri.go:89] found id: "586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:35.353006   64676 cri.go:89] found id: ""
	I1001 20:26:35.353013   64676 logs.go:276] 1 containers: [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f]
	I1001 20:26:35.353061   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.356979   64676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:26:35.357044   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:26:35.397175   64676 cri.go:89] found id: "4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:35.397196   64676 cri.go:89] found id: ""
	I1001 20:26:35.397203   64676 logs.go:276] 1 containers: [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6]
	I1001 20:26:35.397250   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.401025   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:26:35.401108   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:26:35.434312   64676 cri.go:89] found id: "89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:35.434333   64676 cri.go:89] found id: ""
	I1001 20:26:35.434340   64676 logs.go:276] 1 containers: [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d]
	I1001 20:26:35.434400   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.438325   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:26:35.438385   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:26:35.480711   64676 cri.go:89] found id: "fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:35.480738   64676 cri.go:89] found id: ""
	I1001 20:26:35.480750   64676 logs.go:276] 1 containers: [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8]
	I1001 20:26:35.480795   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.484996   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:26:35.485073   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:26:35.524876   64676 cri.go:89] found id: "69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:35.524909   64676 cri.go:89] found id: ""
	I1001 20:26:35.524920   64676 logs.go:276] 1 containers: [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf]
	I1001 20:26:35.524984   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.529297   64676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:26:35.529366   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:26:35.564110   64676 cri.go:89] found id: ""
	I1001 20:26:35.564138   64676 logs.go:276] 0 containers: []
	W1001 20:26:35.564149   64676 logs.go:278] No container was found matching "kindnet"
	I1001 20:26:35.564157   64676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1001 20:26:35.564222   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1001 20:26:35.599279   64676 cri.go:89] found id: "a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
	I1001 20:26:35.599311   64676 cri.go:89] found id: "652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:35.599318   64676 cri.go:89] found id: ""
	I1001 20:26:35.599327   64676 logs.go:276] 2 containers: [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b]
	I1001 20:26:35.599379   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.603377   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.607668   64676 logs.go:123] Gathering logs for kubelet ...
	I1001 20:26:35.607698   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:26:35.678017   64676 logs.go:123] Gathering logs for coredns [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6] ...
	I1001 20:26:35.678053   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:35.717814   64676 logs.go:123] Gathering logs for kube-proxy [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8] ...
	I1001 20:26:35.717842   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:35.752647   64676 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:26:35.752680   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:26:36.259582   64676 logs.go:123] Gathering logs for container status ...
	I1001 20:26:36.259630   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:26:36.299857   64676 logs.go:123] Gathering logs for storage-provisioner [652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b] ...
	I1001 20:26:36.299892   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:36.339923   64676 logs.go:123] Gathering logs for dmesg ...
	I1001 20:26:36.339973   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:26:36.353728   64676 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:26:36.353763   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 20:26:36.728608   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:39.796591   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:36.482029   64676 logs.go:123] Gathering logs for kube-apiserver [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd] ...
	I1001 20:26:36.482071   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:36.525705   64676 logs.go:123] Gathering logs for etcd [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f] ...
	I1001 20:26:36.525741   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:36.566494   64676 logs.go:123] Gathering logs for kube-scheduler [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d] ...
	I1001 20:26:36.566529   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:36.602489   64676 logs.go:123] Gathering logs for kube-controller-manager [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf] ...
	I1001 20:26:36.602523   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:36.666726   64676 logs.go:123] Gathering logs for storage-provisioner [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa] ...
	I1001 20:26:36.666757   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
	I1001 20:26:39.203217   64676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:26:39.220220   64676 api_server.go:72] duration metric: took 4m17.274155342s to wait for apiserver process to appear ...
	I1001 20:26:39.220253   64676 api_server.go:88] waiting for apiserver healthz status ...
	I1001 20:26:39.220301   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:26:39.220372   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:26:39.261710   64676 cri.go:89] found id: "a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:39.261739   64676 cri.go:89] found id: ""
	I1001 20:26:39.261749   64676 logs.go:276] 1 containers: [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd]
	I1001 20:26:39.261804   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.265994   64676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:26:39.266057   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:26:39.298615   64676 cri.go:89] found id: "586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:39.298642   64676 cri.go:89] found id: ""
	I1001 20:26:39.298650   64676 logs.go:276] 1 containers: [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f]
	I1001 20:26:39.298694   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.302584   64676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:26:39.302647   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:26:39.338062   64676 cri.go:89] found id: "4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:39.338091   64676 cri.go:89] found id: ""
	I1001 20:26:39.338102   64676 logs.go:276] 1 containers: [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6]
	I1001 20:26:39.338157   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.342553   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:26:39.342613   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:26:39.379787   64676 cri.go:89] found id: "89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:39.379818   64676 cri.go:89] found id: ""
	I1001 20:26:39.379828   64676 logs.go:276] 1 containers: [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d]
	I1001 20:26:39.379885   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.384397   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:26:39.384454   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:26:39.419175   64676 cri.go:89] found id: "fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:39.419204   64676 cri.go:89] found id: ""
	I1001 20:26:39.419215   64676 logs.go:276] 1 containers: [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8]
	I1001 20:26:39.419275   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.423113   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:26:39.423184   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:26:39.455948   64676 cri.go:89] found id: "69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:39.455974   64676 cri.go:89] found id: ""
	I1001 20:26:39.455984   64676 logs.go:276] 1 containers: [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf]
	I1001 20:26:39.456040   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.459912   64676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:26:39.459978   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:26:39.504152   64676 cri.go:89] found id: ""
	I1001 20:26:39.504179   64676 logs.go:276] 0 containers: []
	W1001 20:26:39.504187   64676 logs.go:278] No container was found matching "kindnet"
	I1001 20:26:39.504192   64676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1001 20:26:39.504241   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1001 20:26:39.538918   64676 cri.go:89] found id: "a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
	I1001 20:26:39.538940   64676 cri.go:89] found id: "652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:39.538947   64676 cri.go:89] found id: ""
	I1001 20:26:39.538957   64676 logs.go:276] 2 containers: [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b]
	I1001 20:26:39.539013   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.542832   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.546365   64676 logs.go:123] Gathering logs for storage-provisioner [652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b] ...
	I1001 20:26:39.546395   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:39.589286   64676 logs.go:123] Gathering logs for kubelet ...
	I1001 20:26:39.589320   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:26:39.657412   64676 logs.go:123] Gathering logs for dmesg ...
	I1001 20:26:39.657447   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:26:39.671553   64676 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:26:39.671581   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 20:26:39.786194   64676 logs.go:123] Gathering logs for kube-apiserver [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd] ...
	I1001 20:26:39.786226   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:39.829798   64676 logs.go:123] Gathering logs for coredns [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6] ...
	I1001 20:26:39.829831   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:39.865854   64676 logs.go:123] Gathering logs for kube-controller-manager [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf] ...
	I1001 20:26:39.865890   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:39.920702   64676 logs.go:123] Gathering logs for storage-provisioner [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa] ...
	I1001 20:26:39.920735   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
	I1001 20:26:39.959343   64676 logs.go:123] Gathering logs for etcd [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f] ...
	I1001 20:26:39.959375   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:40.001320   64676 logs.go:123] Gathering logs for kube-scheduler [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d] ...
	I1001 20:26:40.001354   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:40.037182   64676 logs.go:123] Gathering logs for kube-proxy [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8] ...
	I1001 20:26:40.037214   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:40.070072   64676 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:26:40.070098   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:26:40.492733   64676 logs.go:123] Gathering logs for container status ...
	I1001 20:26:40.492770   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:26:43.042801   64676 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I1001 20:26:43.048223   64676 api_server.go:279] https://192.168.61.93:8443/healthz returned 200:
	ok
	I1001 20:26:43.049199   64676 api_server.go:141] control plane version: v1.31.1
	I1001 20:26:43.049229   64676 api_server.go:131] duration metric: took 3.828968104s to wait for apiserver health ...
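	(Editor's note: the "waiting for apiserver healthz" step above is just a poll of the /healthz endpoint until it answers 200. A minimal sketch of that pattern, not minikube's actual implementation; the URL and timeouts are placeholders taken from the log:)

```go
// healthzwait: illustrative sketch of polling an apiserver /healthz endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// The bring-up apiserver uses a self-signed CA, so verification is
	// skipped here purely for the sketch.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	// Address mirrors the one in the log above; adjust for your own cluster.
	if err := waitForHealthz("https://192.168.61.93:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```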
	I1001 20:26:43.049239   64676 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 20:26:43.049267   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:26:43.049331   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:26:43.087098   64676 cri.go:89] found id: "a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:43.087132   64676 cri.go:89] found id: ""
	I1001 20:26:43.087144   64676 logs.go:276] 1 containers: [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd]
	I1001 20:26:43.087206   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.091606   64676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:26:43.091665   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:26:43.127154   64676 cri.go:89] found id: "586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:43.127177   64676 cri.go:89] found id: ""
	I1001 20:26:43.127184   64676 logs.go:276] 1 containers: [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f]
	I1001 20:26:43.127227   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.131246   64676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:26:43.131320   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:26:43.165473   64676 cri.go:89] found id: "4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:43.165503   64676 cri.go:89] found id: ""
	I1001 20:26:43.165514   64676 logs.go:276] 1 containers: [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6]
	I1001 20:26:43.165577   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.169908   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:26:43.169982   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:26:43.210196   64676 cri.go:89] found id: "89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:43.210225   64676 cri.go:89] found id: ""
	I1001 20:26:43.210235   64676 logs.go:276] 1 containers: [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d]
	I1001 20:26:43.210302   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.214253   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:26:43.214317   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:26:43.249533   64676 cri.go:89] found id: "fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:43.249555   64676 cri.go:89] found id: ""
	I1001 20:26:43.249563   64676 logs.go:276] 1 containers: [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8]
	I1001 20:26:43.249625   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.253555   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:26:43.253633   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:26:43.294711   64676 cri.go:89] found id: "69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:43.294734   64676 cri.go:89] found id: ""
	I1001 20:26:43.294742   64676 logs.go:276] 1 containers: [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf]
	I1001 20:26:43.294787   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.298960   64676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:26:43.299037   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:26:43.339542   64676 cri.go:89] found id: ""
	I1001 20:26:43.339572   64676 logs.go:276] 0 containers: []
	W1001 20:26:43.339582   64676 logs.go:278] No container was found matching "kindnet"
	I1001 20:26:43.339588   64676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1001 20:26:43.339667   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1001 20:26:43.382206   64676 cri.go:89] found id: "a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
	I1001 20:26:43.382230   64676 cri.go:89] found id: "652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:43.382234   64676 cri.go:89] found id: ""
	I1001 20:26:43.382241   64676 logs.go:276] 2 containers: [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b]
	I1001 20:26:43.382289   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.386473   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.390146   64676 logs.go:123] Gathering logs for kubelet ...
	I1001 20:26:43.390172   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:26:43.457659   64676 logs.go:123] Gathering logs for dmesg ...
	I1001 20:26:43.457699   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:26:43.471078   64676 logs.go:123] Gathering logs for kube-apiserver [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd] ...
	I1001 20:26:43.471109   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:43.518058   64676 logs.go:123] Gathering logs for etcd [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f] ...
	I1001 20:26:43.518093   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:43.559757   64676 logs.go:123] Gathering logs for kube-proxy [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8] ...
	I1001 20:26:43.559788   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:43.595485   64676 logs.go:123] Gathering logs for storage-provisioner [652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b] ...
	I1001 20:26:43.595513   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:43.628167   64676 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:26:43.628195   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 20:26:43.741206   64676 logs.go:123] Gathering logs for coredns [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6] ...
	I1001 20:26:43.741234   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:43.777220   64676 logs.go:123] Gathering logs for kube-scheduler [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d] ...
	I1001 20:26:43.777248   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:43.817507   64676 logs.go:123] Gathering logs for kube-controller-manager [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf] ...
	I1001 20:26:43.817536   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:43.880127   64676 logs.go:123] Gathering logs for storage-provisioner [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa] ...
	I1001 20:26:43.880161   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
	I1001 20:26:43.915172   64676 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:26:43.915199   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:26:44.289237   64676 logs.go:123] Gathering logs for container status ...
	I1001 20:26:44.289277   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:26:46.835363   64676 system_pods.go:59] 8 kube-system pods found
	I1001 20:26:46.835393   64676 system_pods.go:61] "coredns-7c65d6cfc9-g8jf8" [7fbddef1-a564-4ee8-ab53-ae838d0fd984] Running
	I1001 20:26:46.835398   64676 system_pods.go:61] "etcd-no-preload-262337" [086d7949-d20d-49d8-871d-a464de60e4cb] Running
	I1001 20:26:46.835402   64676 system_pods.go:61] "kube-apiserver-no-preload-262337" [d8473136-4e07-43e2-bd20-65232e2d5102] Running
	I1001 20:26:46.835405   64676 system_pods.go:61] "kube-controller-manager-no-preload-262337" [63c7d071-20cd-48c5-b410-b78e339b0731] Running
	I1001 20:26:46.835408   64676 system_pods.go:61] "kube-proxy-7rrkn" [e25a055c-0203-4fe7-8801-560b9cdb27bb] Running
	I1001 20:26:46.835412   64676 system_pods.go:61] "kube-scheduler-no-preload-262337" [3b962e64-eea6-4c24-a230-32c40106a4dd] Running
	I1001 20:26:46.835418   64676 system_pods.go:61] "metrics-server-6867b74b74-2rpwt" [235515ab-28fc-437b-983a-243f7a8fb183] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:26:46.835422   64676 system_pods.go:61] "storage-provisioner" [8832193a-39b4-49b9-b943-3241bb27fb8d] Running
	I1001 20:26:46.835431   64676 system_pods.go:74] duration metric: took 3.786183909s to wait for pod list to return data ...
	I1001 20:26:46.835441   64676 default_sa.go:34] waiting for default service account to be created ...
	I1001 20:26:46.838345   64676 default_sa.go:45] found service account: "default"
	I1001 20:26:46.838367   64676 default_sa.go:55] duration metric: took 2.918089ms for default service account to be created ...
	I1001 20:26:46.838375   64676 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 20:26:46.844822   64676 system_pods.go:86] 8 kube-system pods found
	I1001 20:26:46.844850   64676 system_pods.go:89] "coredns-7c65d6cfc9-g8jf8" [7fbddef1-a564-4ee8-ab53-ae838d0fd984] Running
	I1001 20:26:46.844856   64676 system_pods.go:89] "etcd-no-preload-262337" [086d7949-d20d-49d8-871d-a464de60e4cb] Running
	I1001 20:26:46.844860   64676 system_pods.go:89] "kube-apiserver-no-preload-262337" [d8473136-4e07-43e2-bd20-65232e2d5102] Running
	I1001 20:26:46.844863   64676 system_pods.go:89] "kube-controller-manager-no-preload-262337" [63c7d071-20cd-48c5-b410-b78e339b0731] Running
	I1001 20:26:46.844867   64676 system_pods.go:89] "kube-proxy-7rrkn" [e25a055c-0203-4fe7-8801-560b9cdb27bb] Running
	I1001 20:26:46.844870   64676 system_pods.go:89] "kube-scheduler-no-preload-262337" [3b962e64-eea6-4c24-a230-32c40106a4dd] Running
	I1001 20:26:46.844876   64676 system_pods.go:89] "metrics-server-6867b74b74-2rpwt" [235515ab-28fc-437b-983a-243f7a8fb183] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:26:46.844881   64676 system_pods.go:89] "storage-provisioner" [8832193a-39b4-49b9-b943-3241bb27fb8d] Running
	I1001 20:26:46.844889   64676 system_pods.go:126] duration metric: took 6.508902ms to wait for k8s-apps to be running ...
	I1001 20:26:46.844895   64676 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 20:26:46.844934   64676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:26:46.861543   64676 system_svc.go:56] duration metric: took 16.63712ms WaitForService to wait for kubelet
	I1001 20:26:46.861586   64676 kubeadm.go:582] duration metric: took 4m24.915538002s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:26:46.861614   64676 node_conditions.go:102] verifying NodePressure condition ...
	I1001 20:26:46.864599   64676 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 20:26:46.864632   64676 node_conditions.go:123] node cpu capacity is 2
	I1001 20:26:46.864644   64676 node_conditions.go:105] duration metric: took 3.023838ms to run NodePressure ...
	I1001 20:26:46.864657   64676 start.go:241] waiting for startup goroutines ...
	I1001 20:26:46.864667   64676 start.go:246] waiting for cluster config update ...
	I1001 20:26:46.864682   64676 start.go:255] writing updated cluster config ...
	I1001 20:26:46.864960   64676 ssh_runner.go:195] Run: rm -f paused
	I1001 20:26:46.924982   64676 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 20:26:46.926817   64676 out.go:177] * Done! kubectl is now configured to use "no-preload-262337" cluster and "default" namespace by default
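	(Editor's note: the "waiting for kube-system pods" / "k8s-apps to be running" checks that conclude above amount to listing kube-system pods and inspecting their phase. A rough client-go equivalent, assuming a placeholder kubeconfig path rather than the harness's real one:)

```go
// podcheck: illustrative sketch of the "waiting for k8s-apps to be running" step.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; the harness points at the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// A pod stuck in Pending (like the metrics-server entry in the log
		// above) would be reported here.
		if p.Status.Phase != corev1.PodRunning {
			fmt.Printf("%s is %s\n", p.Name, p.Status.Phase)
		}
	}
}
```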
	I1001 20:26:45.880599   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:48.948631   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:55.028660   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:58.100570   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:04.180661   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:07.252656   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:13.332644   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:16.404640   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:22.484714   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:25.556606   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:31.636609   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:34.712725   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:40.788632   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:43.940129   65592 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1001 20:27:43.940232   65592 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1001 20:27:43.942002   65592 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1001 20:27:43.942068   65592 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:27:43.942170   65592 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:27:43.942281   65592 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:27:43.942421   65592 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1001 20:27:43.942518   65592 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 20:27:43.944271   65592 out.go:235]   - Generating certificates and keys ...
	I1001 20:27:43.944389   65592 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:27:43.944486   65592 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:27:43.944600   65592 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 20:27:43.944693   65592 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 20:27:43.944797   65592 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 20:27:43.944888   65592 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 20:27:43.944985   65592 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 20:27:43.945072   65592 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 20:27:43.945190   65592 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 20:27:43.945301   65592 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 20:27:43.945361   65592 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 20:27:43.945420   65592 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:27:43.945467   65592 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:27:43.945515   65592 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:27:43.945585   65592 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:27:43.945651   65592 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:27:43.945772   65592 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:27:43.945899   65592 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:27:43.945961   65592 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:27:43.946057   65592 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:27:43.860704   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:43.947517   65592 out.go:235]   - Booting up control plane ...
	I1001 20:27:43.947644   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:27:43.947767   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:27:43.947861   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:27:43.947978   65592 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:27:43.948185   65592 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1001 20:27:43.948258   65592 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1001 20:27:43.948396   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.948618   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.948695   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.948930   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.948991   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.949149   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.949232   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.949380   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.949439   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.949597   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.949616   65592 kubeadm.go:310] 
	I1001 20:27:43.949658   65592 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1001 20:27:43.949693   65592 kubeadm.go:310] 		timed out waiting for the condition
	I1001 20:27:43.949704   65592 kubeadm.go:310] 
	I1001 20:27:43.949737   65592 kubeadm.go:310] 	This error is likely caused by:
	I1001 20:27:43.949766   65592 kubeadm.go:310] 		- The kubelet is not running
	I1001 20:27:43.949863   65592 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1001 20:27:43.949871   65592 kubeadm.go:310] 
	I1001 20:27:43.949968   65592 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1001 20:27:43.950000   65592 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1001 20:27:43.950034   65592 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1001 20:27:43.950040   65592 kubeadm.go:310] 
	I1001 20:27:43.950136   65592 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1001 20:27:43.950207   65592 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1001 20:27:43.950213   65592 kubeadm.go:310] 
	I1001 20:27:43.950310   65592 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1001 20:27:43.950389   65592 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1001 20:27:43.950454   65592 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1001 20:27:43.950533   65592 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1001 20:27:43.950566   65592 kubeadm.go:310] 
	W1001 20:27:43.950665   65592 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1001 20:27:43.950707   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1001 20:27:44.404995   65592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:27:44.421130   65592 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:27:44.431204   65592 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:27:44.431228   65592 kubeadm.go:157] found existing configuration files:
	
	I1001 20:27:44.431270   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 20:27:44.440792   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:27:44.440857   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:27:44.450469   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 20:27:44.459640   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:27:44.459695   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:27:44.469335   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 20:27:44.478848   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:27:44.478904   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:27:44.489162   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 20:27:44.501070   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:27:44.501157   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
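	(Editor's note: the cleanup sequence above greps each /etc/kubernetes/*.conf for the control-plane endpoint and removes the file when the endpoint is absent. A local Go sketch of that check, shown only as an illustration of the logic the harness runs over SSH:)

```go
// confcleanup: illustrative local version of the stale-kubeconfig check above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: treat the config as stale.
			fmt.Printf("removing stale config %s\n", f)
			_ = os.Remove(f)
		}
	}
}
```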
	I1001 20:27:44.511970   65592 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 20:27:44.728685   65592 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 20:27:49.940611   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:53.016657   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:59.092700   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:02.164611   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:08.244707   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:11.316686   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:17.400607   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:20.468660   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:26.548687   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:29.624608   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:35.700638   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:38.772693   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:44.852721   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:47.924690   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:54.004674   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:57.080644   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:29:03.156750   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:29:06.232700   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:29:12.308749   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:29:15.380633   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:29:18.381649   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 20:29:18.381689   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetMachineName
	I1001 20:29:18.382037   68418 buildroot.go:166] provisioning hostname "default-k8s-diff-port-878552"
	I1001 20:29:18.382063   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetMachineName
	I1001 20:29:18.382291   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:18.384714   68418 machine.go:96] duration metric: took 4m37.419094583s to provisionDockerMachine
	I1001 20:29:18.384772   68418 fix.go:56] duration metric: took 4m37.442164125s for fixHost
	I1001 20:29:18.384782   68418 start.go:83] releasing machines lock for "default-k8s-diff-port-878552", held for 4m37.442187455s
	W1001 20:29:18.384813   68418 start.go:714] error starting host: provision: host is not running
	W1001 20:29:18.384993   68418 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1001 20:29:18.385017   68418 start.go:729] Will try again in 5 seconds ...
	I1001 20:29:23.387086   68418 start.go:360] acquireMachinesLock for default-k8s-diff-port-878552: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 20:29:23.387232   68418 start.go:364] duration metric: took 101.596µs to acquireMachinesLock for "default-k8s-diff-port-878552"
	I1001 20:29:23.387273   68418 start.go:96] Skipping create...Using existing machine configuration
	I1001 20:29:23.387284   68418 fix.go:54] fixHost starting: 
	I1001 20:29:23.387645   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:29:23.387669   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:29:23.403371   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39149
	I1001 20:29:23.404008   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:29:23.404580   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:29:23.404603   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:29:23.405181   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:29:23.405410   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:29:23.405560   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetState
	I1001 20:29:23.407563   68418 fix.go:112] recreateIfNeeded on default-k8s-diff-port-878552: state=Stopped err=<nil>
	I1001 20:29:23.407589   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	W1001 20:29:23.407771   68418 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 20:29:23.409721   68418 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-878552" ...
	I1001 20:29:23.410973   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Start
	I1001 20:29:23.411207   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Ensuring networks are active...
	I1001 20:29:23.412117   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Ensuring network default is active
	I1001 20:29:23.412576   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Ensuring network mk-default-k8s-diff-port-878552 is active
	I1001 20:29:23.412956   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Getting domain xml...
	I1001 20:29:23.413589   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Creating domain...
	I1001 20:29:24.744972   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting to get IP...
	I1001 20:29:24.746001   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:24.746641   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:24.746710   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:24.746607   69521 retry.go:31] will retry after 260.966833ms: waiting for machine to come up
	I1001 20:29:25.009284   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.009825   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.009849   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:25.009778   69521 retry.go:31] will retry after 308.10041ms: waiting for machine to come up
	I1001 20:29:25.319153   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.319717   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.319752   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:25.319652   69521 retry.go:31] will retry after 342.802984ms: waiting for machine to come up
	I1001 20:29:25.664405   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.664893   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.664920   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:25.664816   69521 retry.go:31] will retry after 397.002924ms: waiting for machine to come up
	I1001 20:29:26.063628   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:26.064235   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:26.064259   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:26.064201   69521 retry.go:31] will retry after 526.648832ms: waiting for machine to come up
	I1001 20:29:26.592834   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:26.593284   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:26.593307   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:26.593226   69521 retry.go:31] will retry after 642.569388ms: waiting for machine to come up
	I1001 20:29:27.237224   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:27.237775   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:27.237808   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:27.237714   69521 retry.go:31] will retry after 963.05932ms: waiting for machine to come up
	I1001 20:29:28.202841   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:28.203333   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:28.203363   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:28.203287   69521 retry.go:31] will retry after 1.372004234s: waiting for machine to come up
	I1001 20:29:29.577175   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:29.577678   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:29.577706   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:29.577627   69521 retry.go:31] will retry after 1.693508507s: waiting for machine to come up
	I1001 20:29:31.273758   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:31.274247   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:31.274274   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:31.274201   69521 retry.go:31] will retry after 1.793304779s: waiting for machine to come up
	I1001 20:29:33.069467   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:33.069894   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:33.069915   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:33.069861   69521 retry.go:31] will retry after 2.825253867s: waiting for machine to come up
	I1001 20:29:40.678676   65592 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1001 20:29:40.678797   65592 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1001 20:29:40.680563   65592 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1001 20:29:40.680613   65592 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:29:40.680680   65592 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:29:40.680788   65592 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:29:40.680868   65592 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1001 20:29:40.681030   65592 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 20:29:40.683042   65592 out.go:235]   - Generating certificates and keys ...
	I1001 20:29:40.683149   65592 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:29:40.683245   65592 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:29:40.683353   65592 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 20:29:40.683435   65592 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 20:29:40.683545   65592 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 20:29:40.683605   65592 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 20:29:40.683665   65592 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 20:29:40.683723   65592 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 20:29:40.683793   65592 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 20:29:40.683878   65592 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 20:29:40.683956   65592 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 20:29:40.684054   65592 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:29:40.684127   65592 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:29:40.684212   65592 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:29:40.684303   65592 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:29:40.684414   65592 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:29:40.684551   65592 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:29:40.684661   65592 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:29:40.684724   65592 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:29:40.684827   65592 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:29:35.897417   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:35.897916   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:35.897949   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:35.897862   69521 retry.go:31] will retry after 3.519866937s: waiting for machine to come up
	I1001 20:29:39.419142   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:39.419528   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:39.419554   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:39.419494   69521 retry.go:31] will retry after 3.507101438s: waiting for machine to come up
	I1001 20:29:40.686427   65592 out.go:235]   - Booting up control plane ...
	I1001 20:29:40.686534   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:29:40.686621   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:29:40.686710   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:29:40.686820   65592 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:29:40.686996   65592 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1001 20:29:40.687063   65592 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1001 20:29:40.687127   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.687336   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.687443   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.687674   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.687759   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.687958   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.688047   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.688212   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.688274   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.688510   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.688519   65592 kubeadm.go:310] 
	I1001 20:29:40.688566   65592 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1001 20:29:40.688610   65592 kubeadm.go:310] 		timed out waiting for the condition
	I1001 20:29:40.688617   65592 kubeadm.go:310] 
	I1001 20:29:40.688646   65592 kubeadm.go:310] 	This error is likely caused by:
	I1001 20:29:40.688680   65592 kubeadm.go:310] 		- The kubelet is not running
	I1001 20:29:40.688770   65592 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1001 20:29:40.688778   65592 kubeadm.go:310] 
	I1001 20:29:40.688882   65592 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1001 20:29:40.688937   65592 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1001 20:29:40.688986   65592 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1001 20:29:40.688996   65592 kubeadm.go:310] 
	I1001 20:29:40.689114   65592 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1001 20:29:40.689222   65592 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1001 20:29:40.689237   65592 kubeadm.go:310] 
	I1001 20:29:40.689376   65592 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1001 20:29:40.689517   65592 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1001 20:29:40.689638   65592 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1001 20:29:40.689709   65592 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1001 20:29:40.689786   65592 kubeadm.go:310] 
	I1001 20:29:40.689796   65592 kubeadm.go:394] duration metric: took 7m56.416911577s to StartCluster
	I1001 20:29:40.689838   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:29:40.689896   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:29:40.733027   65592 cri.go:89] found id: ""
	I1001 20:29:40.733059   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.733068   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:29:40.733073   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:29:40.733120   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:29:40.767975   65592 cri.go:89] found id: ""
	I1001 20:29:40.768010   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.768021   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:29:40.768029   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:29:40.768095   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:29:40.802624   65592 cri.go:89] found id: ""
	I1001 20:29:40.802657   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.802668   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:29:40.802676   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:29:40.802748   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:29:40.838109   65592 cri.go:89] found id: ""
	I1001 20:29:40.838142   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.838151   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:29:40.838157   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:29:40.838204   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:29:40.873083   65592 cri.go:89] found id: ""
	I1001 20:29:40.873112   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.873124   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:29:40.873131   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:29:40.873192   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:29:40.907675   65592 cri.go:89] found id: ""
	I1001 20:29:40.907705   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.907714   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:29:40.907720   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:29:40.907775   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:29:40.941641   65592 cri.go:89] found id: ""
	I1001 20:29:40.941669   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.941678   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:29:40.941691   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:29:40.941748   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:29:40.978189   65592 cri.go:89] found id: ""
	I1001 20:29:40.978216   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.978227   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:29:40.978238   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:29:40.978254   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:29:41.053798   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:29:41.053823   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:29:41.053835   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:29:41.160669   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:29:41.160715   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:29:41.218152   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:29:41.218182   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:29:41.274784   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:29:41.274821   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1001 20:29:41.288554   65592 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1001 20:29:41.288613   65592 out.go:270] * 
	W1001 20:29:41.288663   65592 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1001 20:29:41.288674   65592 out.go:270] * 
	W1001 20:29:41.289525   65592 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 20:29:41.292969   65592 out.go:201] 
	W1001 20:29:41.294238   65592 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1001 20:29:41.294278   65592 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1001 20:29:41.294297   65592 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1001 20:29:41.295783   65592 out.go:201] 
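	[editor's note] The kubeadm wait-control-plane failure above ends the v1.20.0 start attempt; minikube's own hint in the log (the K8S_KUBELET_NOT_RUNNING exit and the reference to issue 4172) is to inspect the kubelet and retry with the systemd cgroup driver. A minimal triage sketch using only commands already quoted in this log; the profile flag and name are placeholders, since the profile is not shown in this excerpt:
	
	# on the guest VM (commands quoted verbatim in the kubeadm output above)
	systemctl status kubelet
	journalctl -xeu kubelet
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	
	# retry per the suggestion logged above; <profile> is a placeholder
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd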
	I1001 20:29:42.929490   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:42.930036   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has current primary IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:42.930058   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Found IP for machine: 192.168.50.4
	I1001 20:29:42.930091   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Reserving static IP address...
	I1001 20:29:42.930623   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-878552", mac: "52:54:00:72:13:05", ip: "192.168.50.4"} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:42.930660   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | skip adding static IP to network mk-default-k8s-diff-port-878552 - found existing host DHCP lease matching {name: "default-k8s-diff-port-878552", mac: "52:54:00:72:13:05", ip: "192.168.50.4"}
	I1001 20:29:42.930686   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Reserved static IP address: 192.168.50.4
	I1001 20:29:42.930703   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for SSH to be available...
	I1001 20:29:42.930719   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | Getting to WaitForSSH function...
	I1001 20:29:42.933472   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:42.933911   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:42.933948   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:42.934106   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | Using SSH client type: external
	I1001 20:29:42.934134   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa (-rw-------)
	I1001 20:29:42.934168   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.4 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 20:29:42.934190   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | About to run SSH command:
	I1001 20:29:42.934210   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | exit 0
	I1001 20:29:43.064425   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | SSH cmd err, output: <nil>: 
	I1001 20:29:43.064821   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetConfigRaw
	I1001 20:29:43.065476   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetIP
	I1001 20:29:43.068442   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.068951   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.068982   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.069236   68418 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/config.json ...
	I1001 20:29:43.069476   68418 machine.go:93] provisionDockerMachine start ...
	I1001 20:29:43.069498   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:29:43.069726   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:43.072374   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.072720   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.072754   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.072974   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:43.073170   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.073358   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.073501   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:43.073685   68418 main.go:141] libmachine: Using SSH client type: native
	I1001 20:29:43.073919   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1001 20:29:43.073946   68418 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 20:29:43.188588   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1001 20:29:43.188626   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetMachineName
	I1001 20:29:43.188887   68418 buildroot.go:166] provisioning hostname "default-k8s-diff-port-878552"
	I1001 20:29:43.188948   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetMachineName
	I1001 20:29:43.189182   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:43.192158   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.192550   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.192575   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.192743   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:43.192918   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.193081   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.193193   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:43.193317   68418 main.go:141] libmachine: Using SSH client type: native
	I1001 20:29:43.193466   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1001 20:29:43.193478   68418 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-878552 && echo "default-k8s-diff-port-878552" | sudo tee /etc/hostname
	I1001 20:29:43.318342   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-878552
	
	I1001 20:29:43.318381   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:43.321205   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.321777   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.321807   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.322031   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:43.322218   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.322360   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.322515   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:43.322729   68418 main.go:141] libmachine: Using SSH client type: native
	I1001 20:29:43.322907   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1001 20:29:43.322925   68418 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-878552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-878552/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-878552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 20:29:43.440839   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 20:29:43.440884   68418 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 20:29:43.440949   68418 buildroot.go:174] setting up certificates
	I1001 20:29:43.440966   68418 provision.go:84] configureAuth start
	I1001 20:29:43.440982   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetMachineName
	I1001 20:29:43.441238   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetIP
	I1001 20:29:43.443849   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.444223   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.444257   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.444432   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:43.446569   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.447004   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.447032   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.447130   68418 provision.go:143] copyHostCerts
	I1001 20:29:43.447210   68418 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 20:29:43.447224   68418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 20:29:43.447317   68418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 20:29:43.447430   68418 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 20:29:43.447442   68418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 20:29:43.447484   68418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 20:29:43.447560   68418 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 20:29:43.447570   68418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 20:29:43.447602   68418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 20:29:43.447729   68418 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-878552 san=[127.0.0.1 192.168.50.4 default-k8s-diff-port-878552 localhost minikube]
	I1001 20:29:43.597134   68418 provision.go:177] copyRemoteCerts
	I1001 20:29:43.597195   68418 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 20:29:43.597216   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:43.599988   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.600379   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.600414   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.600598   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:43.600799   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.600970   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:43.601115   68418 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa Username:docker}
	I1001 20:29:43.687211   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 20:29:43.714280   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1001 20:29:43.738536   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 20:29:43.764130   68418 provision.go:87] duration metric: took 323.147928ms to configureAuth
	I1001 20:29:43.764163   68418 buildroot.go:189] setting minikube options for container-runtime
	I1001 20:29:43.764353   68418 config.go:182] Loaded profile config "default-k8s-diff-port-878552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:29:43.764462   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:43.767588   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.767962   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.767991   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.768181   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:43.768339   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.768525   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.768665   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:43.768833   68418 main.go:141] libmachine: Using SSH client type: native
	I1001 20:29:43.768994   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1001 20:29:43.769013   68418 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 20:29:43.998941   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 20:29:43.998964   68418 machine.go:96] duration metric: took 929.475626ms to provisionDockerMachine
	I1001 20:29:43.998976   68418 start.go:293] postStartSetup for "default-k8s-diff-port-878552" (driver="kvm2")
	I1001 20:29:43.998989   68418 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 20:29:43.999008   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:29:43.999305   68418 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 20:29:43.999332   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:44.001854   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.002381   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:44.002433   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.002555   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:44.002787   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:44.002967   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:44.003142   68418 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa Username:docker}
	I1001 20:29:44.091378   68418 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 20:29:44.096207   68418 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 20:29:44.096235   68418 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 20:29:44.096315   68418 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 20:29:44.096424   68418 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 20:29:44.096531   68418 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 20:29:44.106232   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 20:29:44.130524   68418 start.go:296] duration metric: took 131.532724ms for postStartSetup
	I1001 20:29:44.130564   68418 fix.go:56] duration metric: took 20.743280839s for fixHost
	I1001 20:29:44.130589   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:44.133873   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.134285   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:44.134309   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.134509   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:44.134719   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:44.134873   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:44.135025   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:44.135172   68418 main.go:141] libmachine: Using SSH client type: native
	I1001 20:29:44.135362   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1001 20:29:44.135376   68418 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 20:29:44.249136   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727814584.207146331
	
	I1001 20:29:44.249160   68418 fix.go:216] guest clock: 1727814584.207146331
	I1001 20:29:44.249189   68418 fix.go:229] Guest: 2024-10-01 20:29:44.207146331 +0000 UTC Remote: 2024-10-01 20:29:44.13056925 +0000 UTC m=+303.335525185 (delta=76.577081ms)
	I1001 20:29:44.249215   68418 fix.go:200] guest clock delta is within tolerance: 76.577081ms
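The fix.go lines above compare the guest clock (read over SSH with `date +%s.%N`) against the host clock and accept the ~76ms delta because it is within tolerance. A minimal Go sketch of that kind of check, assuming a 2-second tolerance and a hypothetical readGuestClock helper in place of the real SSH call:

package main

import (
	"fmt"
	"time"
)

// readGuestClock is a hypothetical stand-in for running `date +%s.%N`
// on the guest over SSH; here it simply returns the local time.
func readGuestClock() time.Time {
	return time.Now()
}

func main() {
	const tolerance = 2 * time.Second // assumed tolerance, not necessarily minikube's exact value

	host := time.Now()
	guest := readGuestClock()

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}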
	I1001 20:29:44.249220   68418 start.go:83] releasing machines lock for "default-k8s-diff-port-878552", held for 20.861972701s
	I1001 20:29:44.249238   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:29:44.249527   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetIP
	I1001 20:29:44.252984   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.253526   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:44.253569   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.253903   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:29:44.254449   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:29:44.254618   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:29:44.254680   68418 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 20:29:44.254727   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:44.254810   68418 ssh_runner.go:195] Run: cat /version.json
	I1001 20:29:44.254833   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:44.257550   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.257826   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.258077   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:44.258114   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.258363   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:44.258489   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:44.258529   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.258563   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:44.258683   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:44.258784   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:44.258832   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:44.258915   68418 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa Username:docker}
	I1001 20:29:44.258965   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:44.259113   68418 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa Username:docker}
	I1001 20:29:44.379049   68418 ssh_runner.go:195] Run: systemctl --version
	I1001 20:29:44.384985   68418 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 20:29:44.527579   68418 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 20:29:44.533267   68418 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 20:29:44.533357   68418 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 20:29:44.552308   68418 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 20:29:44.552333   68418 start.go:495] detecting cgroup driver to use...
	I1001 20:29:44.552421   68418 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 20:29:44.573762   68418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 20:29:44.588010   68418 docker.go:217] disabling cri-docker service (if available) ...
	I1001 20:29:44.588063   68418 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 20:29:44.602369   68418 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 20:29:44.618754   68418 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 20:29:44.757380   68418 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 20:29:44.941718   68418 docker.go:233] disabling docker service ...
	I1001 20:29:44.941790   68418 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 20:29:44.957306   68418 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 20:29:44.971723   68418 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 20:29:45.094124   68418 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 20:29:45.220645   68418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 20:29:45.236217   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 20:29:45.255752   68418 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 20:29:45.255820   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:29:45.266327   68418 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 20:29:45.266398   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:29:45.276964   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:29:45.288013   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:29:45.298669   68418 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 20:29:45.309693   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:29:45.320041   68418 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:29:45.336621   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
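The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch the cgroup manager to cgroupfs. A rough Go equivalent of those two substitutions, operating on an illustrative local file path rather than over SSH:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Assumed path for illustration; the log edits /etc/crio/crio.conf.d/02-crio.conf inside the guest.
	path := "02-crio.conf"

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	conf := string(data)
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}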
	I1001 20:29:45.346862   68418 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 20:29:45.357656   68418 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 20:29:45.357717   68418 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 20:29:45.372693   68418 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
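After loading br_netfilter, the runner enables IPv4 forwarding by writing 1 to /proc/sys/net/ipv4/ip_forward. A tiny sketch of the same sysctl-by-procfs write in Go (needs root, shown purely as an illustration):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Same effect as: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "enable ip_forward:", err)
		os.Exit(1)
	}
	fmt.Println("net.ipv4.ip_forward = 1")
}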
	I1001 20:29:45.383796   68418 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:29:45.524957   68418 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 20:29:45.611630   68418 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 20:29:45.611702   68418 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 20:29:45.616520   68418 start.go:563] Will wait 60s for crictl version
	I1001 20:29:45.616587   68418 ssh_runner.go:195] Run: which crictl
	I1001 20:29:45.620321   68418 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 20:29:45.661806   68418 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 20:29:45.661890   68418 ssh_runner.go:195] Run: crio --version
	I1001 20:29:45.690843   68418 ssh_runner.go:195] Run: crio --version
	I1001 20:29:45.720183   68418 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 20:29:45.721659   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetIP
	I1001 20:29:45.724986   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:45.725349   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:45.725376   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:45.725583   68418 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1001 20:29:45.729522   68418 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 20:29:45.741877   68418 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-878552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-878552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.4 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 20:29:45.742008   68418 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 20:29:45.742051   68418 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:29:45.779002   68418 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1001 20:29:45.779081   68418 ssh_runner.go:195] Run: which lz4
	I1001 20:29:45.782751   68418 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 20:29:45.786704   68418 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 20:29:45.786733   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1001 20:29:47.072431   68418 crio.go:462] duration metric: took 1.289701438s to copy over tarball
	I1001 20:29:47.072508   68418 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 20:29:49.166576   68418 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.094040254s)
	I1001 20:29:49.166604   68418 crio.go:469] duration metric: took 2.094143226s to extract the tarball
	I1001 20:29:49.166613   68418 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1001 20:29:49.203988   68418 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:29:49.250464   68418 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 20:29:49.250490   68418 cache_images.go:84] Images are preloaded, skipping loading
	I1001 20:29:49.250499   68418 kubeadm.go:934] updating node { 192.168.50.4 8444 v1.31.1 crio true true} ...
	I1001 20:29:49.250612   68418 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-878552 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-878552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 20:29:49.250697   68418 ssh_runner.go:195] Run: crio config
	I1001 20:29:49.298003   68418 cni.go:84] Creating CNI manager for ""
	I1001 20:29:49.298024   68418 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:29:49.298032   68418 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 20:29:49.298055   68418 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.4 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-878552 NodeName:default-k8s-diff-port-878552 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 20:29:49.298183   68418 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.4
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-878552"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 20:29:49.298253   68418 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 20:29:49.308945   68418 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 20:29:49.309011   68418 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 20:29:49.319017   68418 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1001 20:29:49.335588   68418 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 20:29:49.351598   68418 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
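The kubeadm/kubelet/kube-proxy YAML shown above is rendered from the kubeadm options struct and copied to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch of that render step using text/template, with a made-up, trimmed-down Params struct covering only a few of the real fields (this is not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// Params is a hypothetical, trimmed-down stand-in for minikube's kubeadm options.
type Params struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	p := Params{
		AdvertiseAddress: "192.168.50.4",
		BindPort:         8444,
		NodeName:         "default-k8s-diff-port-878552",
		PodSubnet:        "10.244.0.0/16",
	}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Write the rendered config to stdout; in the log it is scp'd to the guest as kubeadm.yaml.new.
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}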
	I1001 20:29:49.369172   68418 ssh_runner.go:195] Run: grep 192.168.50.4	control-plane.minikube.internal$ /etc/hosts
	I1001 20:29:49.372755   68418 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.4	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 20:29:49.385529   68418 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:29:49.509676   68418 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:29:49.527149   68418 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552 for IP: 192.168.50.4
	I1001 20:29:49.527170   68418 certs.go:194] generating shared ca certs ...
	I1001 20:29:49.527185   68418 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:29:49.527321   68418 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 20:29:49.527368   68418 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 20:29:49.527378   68418 certs.go:256] generating profile certs ...
	I1001 20:29:49.527456   68418 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/client.key
	I1001 20:29:49.527514   68418 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/apiserver.key.7bbee9b6
	I1001 20:29:49.527555   68418 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/proxy-client.key
	I1001 20:29:49.527668   68418 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem (1338 bytes)
	W1001 20:29:49.527707   68418 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430_empty.pem, impossibly tiny 0 bytes
	I1001 20:29:49.527735   68418 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 20:29:49.527772   68418 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 20:29:49.527811   68418 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 20:29:49.527848   68418 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 20:29:49.527907   68418 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem (1708 bytes)
	I1001 20:29:49.529210   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 20:29:49.577743   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 20:29:49.617960   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 20:29:49.659543   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 20:29:49.709464   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1001 20:29:49.734308   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 20:29:49.759576   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 20:29:49.784416   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 20:29:49.809150   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 20:29:49.833580   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem --> /usr/share/ca-certificates/18430.pem (1338 bytes)
	I1001 20:29:49.857628   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /usr/share/ca-certificates/184302.pem (1708 bytes)
	I1001 20:29:49.880924   68418 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 20:29:49.897478   68418 ssh_runner.go:195] Run: openssl version
	I1001 20:29:49.903488   68418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 20:29:49.914490   68418 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:29:49.919105   68418 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:29:49.919165   68418 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:29:49.925133   68418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 20:29:49.936294   68418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18430.pem && ln -fs /usr/share/ca-certificates/18430.pem /etc/ssl/certs/18430.pem"
	I1001 20:29:49.946630   68418 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18430.pem
	I1001 20:29:49.951255   68418 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 20:29:49.951308   68418 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18430.pem
	I1001 20:29:49.957277   68418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18430.pem /etc/ssl/certs/51391683.0"
	I1001 20:29:49.971166   68418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/184302.pem && ln -fs /usr/share/ca-certificates/184302.pem /etc/ssl/certs/184302.pem"
	I1001 20:29:49.982558   68418 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/184302.pem
	I1001 20:29:49.986947   68418 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 20:29:49.987003   68418 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/184302.pem
	I1001 20:29:49.992569   68418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/184302.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 20:29:50.002922   68418 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 20:29:50.007707   68418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1001 20:29:50.013717   68418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1001 20:29:50.020166   68418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1001 20:29:50.026795   68418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1001 20:29:50.033544   68418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1001 20:29:50.039686   68418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
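The `openssl x509 -checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours. A self-contained Go sketch of the same check for a single PEM file (the path is illustrative; the log checks files under /var/lib/minikube/certs):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	path := "apiserver-kubelet-client.crt" // illustrative path

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Same idea as `openssl x509 -checkend 86400`: fail if the cert expires within 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Printf("%s expires within 24h (NotAfter=%s), would regenerate\n", path, cert.NotAfter)
		os.Exit(1)
	}
	fmt.Printf("%s valid past the next 24h\n", path)
}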
	I1001 20:29:50.045837   68418 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-878552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-878552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.4 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:29:50.045971   68418 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 20:29:50.046025   68418 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 20:29:50.086925   68418 cri.go:89] found id: ""
	I1001 20:29:50.086999   68418 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 20:29:50.097130   68418 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1001 20:29:50.097167   68418 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1001 20:29:50.097222   68418 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1001 20:29:50.108298   68418 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1001 20:29:50.109389   68418 kubeconfig.go:125] found "default-k8s-diff-port-878552" server: "https://192.168.50.4:8444"
	I1001 20:29:50.111587   68418 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1001 20:29:50.122158   68418 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.4
	I1001 20:29:50.122199   68418 kubeadm.go:1160] stopping kube-system containers ...
	I1001 20:29:50.122213   68418 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1001 20:29:50.122281   68418 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 20:29:50.160351   68418 cri.go:89] found id: ""
	I1001 20:29:50.160434   68418 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1001 20:29:50.178857   68418 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:29:50.190857   68418 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:29:50.190879   68418 kubeadm.go:157] found existing configuration files:
	
	I1001 20:29:50.190926   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1001 20:29:50.200391   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:29:50.200449   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:29:50.210388   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1001 20:29:50.219943   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:29:50.220007   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:29:50.229576   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1001 20:29:50.239983   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:29:50.240055   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:29:50.251062   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1001 20:29:50.261349   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:29:50.261430   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 20:29:50.271284   68418 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 20:29:50.281256   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:29:50.393255   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:29:51.469349   68418 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.076029092s)
	I1001 20:29:51.469386   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:29:51.683522   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:29:51.749545   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:29:51.856549   68418 api_server.go:52] waiting for apiserver process to appear ...
	I1001 20:29:51.856662   68418 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:29:52.356980   68418 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:29:52.857568   68418 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:29:53.357123   68418 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:29:53.372308   68418 api_server.go:72] duration metric: took 1.515757915s to wait for apiserver process to appear ...
	I1001 20:29:53.372341   68418 api_server.go:88] waiting for apiserver healthz status ...
	I1001 20:29:53.372387   68418 api_server.go:253] Checking apiserver healthz at https://192.168.50.4:8444/healthz ...
	I1001 20:29:53.372877   68418 api_server.go:269] stopped: https://192.168.50.4:8444/healthz: Get "https://192.168.50.4:8444/healthz": dial tcp 192.168.50.4:8444: connect: connection refused
	I1001 20:29:53.872447   68418 api_server.go:253] Checking apiserver healthz at https://192.168.50.4:8444/healthz ...
	I1001 20:29:56.591087   68418 api_server.go:279] https://192.168.50.4:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1001 20:29:56.591111   68418 api_server.go:103] status: https://192.168.50.4:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1001 20:29:56.591122   68418 api_server.go:253] Checking apiserver healthz at https://192.168.50.4:8444/healthz ...
	I1001 20:29:56.668641   68418 api_server.go:279] https://192.168.50.4:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1001 20:29:56.668672   68418 api_server.go:103] status: https://192.168.50.4:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1001 20:29:56.872906   68418 api_server.go:253] Checking apiserver healthz at https://192.168.50.4:8444/healthz ...
	I1001 20:29:56.882393   68418 api_server.go:279] https://192.168.50.4:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 20:29:56.882433   68418 api_server.go:103] status: https://192.168.50.4:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 20:29:57.372590   68418 api_server.go:253] Checking apiserver healthz at https://192.168.50.4:8444/healthz ...
	I1001 20:29:57.377715   68418 api_server.go:279] https://192.168.50.4:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 20:29:57.377745   68418 api_server.go:103] status: https://192.168.50.4:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 20:29:57.873466   68418 api_server.go:253] Checking apiserver healthz at https://192.168.50.4:8444/healthz ...
	I1001 20:29:57.879628   68418 api_server.go:279] https://192.168.50.4:8444/healthz returned 200:
	ok
	I1001 20:29:57.889478   68418 api_server.go:141] control plane version: v1.31.1
	I1001 20:29:57.889512   68418 api_server.go:131] duration metric: took 4.517163838s to wait for apiserver health ...
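The loop above polls https://192.168.50.4:8444/healthz and tolerates the transient 403 and 500 responses until the apiserver returns 200 ok. A compact Go sketch of that kind of poll, with certificate verification skipped only to keep the sketch short (the real client authenticates with the cluster's client certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above; InsecureSkipVerify is for brevity only.
	url := "https://192.168.50.4:8444/healthz"
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned 200: %s\n", body)
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}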
	I1001 20:29:57.889520   68418 cni.go:84] Creating CNI manager for ""
	I1001 20:29:57.889534   68418 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:29:57.891485   68418 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 20:29:57.892936   68418 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 20:29:57.910485   68418 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1001 20:29:57.930071   68418 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 20:29:57.940155   68418 system_pods.go:59] 8 kube-system pods found
	I1001 20:29:57.940191   68418 system_pods.go:61] "coredns-7c65d6cfc9-cmchv" [55a0612c-d596-4799-a9f9-0b6d9361ca15] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 20:29:57.940202   68418 system_pods.go:61] "etcd-default-k8s-diff-port-878552" [bcd7c228-d83d-4eec-9a64-f33dac086dcd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1001 20:29:57.940211   68418 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-878552" [23602015-b245-4e14-a076-2e9efb0f2f66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1001 20:29:57.940232   68418 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-878552" [e94298d4-75e3-4fbb-b361-6e5248273355] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1001 20:29:57.940239   68418 system_pods.go:61] "kube-proxy-sxxfj" [2bd75205-221e-498e-8a80-1e7a727fd4e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1001 20:29:57.940246   68418 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-878552" [ddcacd2c-3781-46df-83f8-e6763485a55d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1001 20:29:57.940254   68418 system_pods.go:61] "metrics-server-6867b74b74-b62f8" [26359941-b4d3-442c-ae46-4303a2f7b5e0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:29:57.940262   68418 system_pods.go:61] "storage-provisioner" [a34592b0-f9e5-465b-9d64-07cf84f0c951] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1001 20:29:57.940279   68418 system_pods.go:74] duration metric: took 10.189531ms to wait for pod list to return data ...
	I1001 20:29:57.940292   68418 node_conditions.go:102] verifying NodePressure condition ...
	I1001 20:29:57.945316   68418 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 20:29:57.945349   68418 node_conditions.go:123] node cpu capacity is 2
	I1001 20:29:57.945362   68418 node_conditions.go:105] duration metric: took 5.063896ms to run NodePressure ...
	I1001 20:29:57.945380   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:29:58.233781   68418 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1001 20:29:58.237692   68418 kubeadm.go:739] kubelet initialised
	I1001 20:29:58.237713   68418 kubeadm.go:740] duration metric: took 3.903724ms waiting for restarted kubelet to initialise ...
	I1001 20:29:58.237721   68418 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:29:58.243500   68418 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-cmchv" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:00.249577   68418 pod_ready.go:103] pod "coredns-7c65d6cfc9-cmchv" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:02.250329   68418 pod_ready.go:103] pod "coredns-7c65d6cfc9-cmchv" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:04.750635   68418 pod_ready.go:103] pod "coredns-7c65d6cfc9-cmchv" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:06.751559   68418 pod_ready.go:93] pod "coredns-7c65d6cfc9-cmchv" in "kube-system" namespace has status "Ready":"True"
	I1001 20:30:06.751583   68418 pod_ready.go:82] duration metric: took 8.508053751s for pod "coredns-7c65d6cfc9-cmchv" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:06.751594   68418 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:08.757727   68418 pod_ready.go:103] pod "etcd-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:10.260326   68418 pod_ready.go:93] pod "etcd-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:30:10.260352   68418 pod_ready.go:82] duration metric: took 3.508751351s for pod "etcd-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.260388   68418 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.267041   68418 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:30:10.267071   68418 pod_ready.go:82] duration metric: took 6.67429ms for pod "kube-apiserver-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.267083   68418 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.773135   68418 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:30:10.773156   68418 pod_ready.go:82] duration metric: took 506.065053ms for pod "kube-controller-manager-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.773166   68418 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-sxxfj" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.777890   68418 pod_ready.go:93] pod "kube-proxy-sxxfj" in "kube-system" namespace has status "Ready":"True"
	I1001 20:30:10.777910   68418 pod_ready.go:82] duration metric: took 4.738315ms for pod "kube-proxy-sxxfj" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.777918   68418 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.782610   68418 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:30:10.782634   68418 pod_ready.go:82] duration metric: took 4.708989ms for pod "kube-scheduler-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.782644   68418 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:12.789050   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:15.290635   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:17.290867   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:19.789502   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:21.789999   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:24.289487   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:26.789083   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:28.789955   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:30.790439   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:33.289188   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:35.289313   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:37.289903   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:39.788459   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:41.788633   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:43.788867   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:46.290002   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:48.789891   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:51.289334   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:53.788643   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:55.789983   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:58.288949   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:00.289478   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:02.290789   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:04.789722   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:07.289474   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:09.290183   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:11.790355   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:14.289284   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:16.289536   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:18.289606   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:20.789261   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:22.789463   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:25.290185   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:27.788643   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:29.788778   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:31.790285   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:34.288230   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:36.288784   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:38.289862   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:40.789317   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:43.289232   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:45.290400   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:47.788723   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:49.789327   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:52.289114   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:54.788895   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:56.788984   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:59.288473   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:01.789415   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:04.289328   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:06.289615   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:08.788879   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:10.790191   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:13.288885   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:15.789008   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:17.789191   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:19.789559   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:22.288958   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:24.290206   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:26.788241   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:28.789457   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:31.288929   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:33.789418   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:35.789932   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:38.288742   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:40.289667   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:42.789129   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:44.790115   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:47.289310   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:49.289558   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:51.789255   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:54.289586   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:56.788032   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:58.789012   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:01.289206   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:03.788129   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:05.788915   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:07.790124   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:10.289206   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:12.789314   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:14.789636   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:17.288443   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:19.289524   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:21.289650   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:23.789802   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:26.289735   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:28.788897   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:30.789339   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:33.289295   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:35.289664   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:37.789968   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:40.289657   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:42.789430   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:45.289320   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:47.789980   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:50.287836   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:52.289028   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:54.788936   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:56.789521   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:59.289778   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:34:01.788398   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:34:03.789045   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:34:05.789391   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:34:08.289322   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:34:10.783748   68418 pod_ready.go:82] duration metric: took 4m0.001085136s for pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace to be "Ready" ...
	E1001 20:34:10.783784   68418 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace to be "Ready" (will not retry!)
	I1001 20:34:10.783805   68418 pod_ready.go:39] duration metric: took 4m12.546072786s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:34:10.783831   68418 kubeadm.go:597] duration metric: took 4m20.686657254s to restartPrimaryControlPlane
	W1001 20:34:10.783895   68418 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
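At this point the metrics-server pod has failed to reach Ready for the full 4m0s window, so the tool falls back to a cluster reset. A minimal sketch of read-only commands one could run against this cluster to see why the pod stayed NotReady; the pod and namespace names are taken from the log above, while the kubectl context name (assumed to equal the minikube profile name) and the `metrics-server` deployment/APIService names are assumptions based on minikube's usual addon layout:

    # Pod conditions and recent events for the stuck pod (name from the log above)
    kubectl --context default-k8s-diff-port-878552 -n kube-system \
      describe pod metrics-server-6867b74b74-b62f8
    # Container logs, which usually show why readiness probes are failing
    kubectl --context default-k8s-diff-port-878552 -n kube-system \
      logs deploy/metrics-server --tail=50
    # Whether the aggregated metrics API ever registered and became Available
    kubectl --context default-k8s-diff-port-878552 \
      get apiservice v1beta1.metrics.k8s.io -o wide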
	I1001 20:34:10.783926   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1001 20:34:36.981542   68418 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.197594945s)
	I1001 20:34:36.981628   68418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:34:37.005650   68418 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 20:34:37.017406   68418 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:34:37.031711   68418 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:34:37.031737   68418 kubeadm.go:157] found existing configuration files:
	
	I1001 20:34:37.031801   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1001 20:34:37.054028   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:34:37.054096   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:34:37.068277   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1001 20:34:37.099472   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:34:37.099558   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:34:37.109813   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1001 20:34:37.119548   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:34:37.119620   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:34:37.129522   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1001 20:34:37.138911   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:34:37.138971   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
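The four grep/rm pairs above implement one simple rule: keep a kubeconfig-style file under /etc/kubernetes only if it already points at this cluster's API endpoint, otherwise delete it so the upcoming `kubeadm init` regenerates it. A condensed sketch of that same check, using the endpoint string from the log:

    ENDPOINT="https://control-plane.minikube.internal:8444"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # grep fails both when the file is missing and when it points elsewhere
      if ! sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done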
	I1001 20:34:37.149119   68418 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 20:34:37.193177   68418 kubeadm.go:310] W1001 20:34:37.161028    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 20:34:37.193935   68418 kubeadm.go:310] W1001 20:34:37.161888    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 20:34:37.305111   68418 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 20:34:45.582383   68418 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 20:34:45.582463   68418 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:34:45.582540   68418 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:34:45.582643   68418 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:34:45.582725   68418 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 20:34:45.582825   68418 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 20:34:45.584304   68418 out.go:235]   - Generating certificates and keys ...
	I1001 20:34:45.584409   68418 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:34:45.584488   68418 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:34:45.584584   68418 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 20:34:45.584666   68418 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 20:34:45.584757   68418 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 20:34:45.584833   68418 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 20:34:45.584926   68418 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 20:34:45.585014   68418 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 20:34:45.585109   68418 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 20:34:45.585227   68418 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 20:34:45.585291   68418 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 20:34:45.585364   68418 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:34:45.585438   68418 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:34:45.585527   68418 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 20:34:45.585609   68418 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:34:45.585710   68418 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:34:45.585792   68418 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:34:45.585901   68418 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:34:45.585990   68418 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:34:45.587360   68418 out.go:235]   - Booting up control plane ...
	I1001 20:34:45.587448   68418 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:34:45.587539   68418 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:34:45.587626   68418 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:34:45.587751   68418 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:34:45.587885   68418 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:34:45.587960   68418 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:34:45.588118   68418 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 20:34:45.588256   68418 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 20:34:45.588341   68418 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002411615s
	I1001 20:34:45.588453   68418 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1001 20:34:45.588531   68418 kubeadm.go:310] [api-check] The API server is healthy after 5.002438287s
	I1001 20:34:45.588653   68418 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 20:34:45.588821   68418 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 20:34:45.588925   68418 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 20:34:45.589184   68418 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-878552 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 20:34:45.589272   68418 kubeadm.go:310] [bootstrap-token] Using token: p1d60n.4sgx895mi22cjpsf
	I1001 20:34:45.590444   68418 out.go:235]   - Configuring RBAC rules ...
	I1001 20:34:45.590599   68418 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 20:34:45.590726   68418 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 20:34:45.590923   68418 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 20:34:45.591071   68418 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 20:34:45.591222   68418 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 20:34:45.591301   68418 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 20:34:45.591402   68418 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 20:34:45.591441   68418 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 20:34:45.591485   68418 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 20:34:45.591492   68418 kubeadm.go:310] 
	I1001 20:34:45.591540   68418 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 20:34:45.591548   68418 kubeadm.go:310] 
	I1001 20:34:45.591614   68418 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 20:34:45.591619   68418 kubeadm.go:310] 
	I1001 20:34:45.591644   68418 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 20:34:45.591694   68418 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 20:34:45.591750   68418 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 20:34:45.591756   68418 kubeadm.go:310] 
	I1001 20:34:45.591812   68418 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 20:34:45.591818   68418 kubeadm.go:310] 
	I1001 20:34:45.591857   68418 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 20:34:45.591865   68418 kubeadm.go:310] 
	I1001 20:34:45.591909   68418 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 20:34:45.591990   68418 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 20:34:45.592063   68418 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 20:34:45.592071   68418 kubeadm.go:310] 
	I1001 20:34:45.592195   68418 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 20:34:45.592313   68418 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 20:34:45.592322   68418 kubeadm.go:310] 
	I1001 20:34:45.592432   68418 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token p1d60n.4sgx895mi22cjpsf \
	I1001 20:34:45.592579   68418 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 \
	I1001 20:34:45.592611   68418 kubeadm.go:310] 	--control-plane 
	I1001 20:34:45.592620   68418 kubeadm.go:310] 
	I1001 20:34:45.592734   68418 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 20:34:45.592743   68418 kubeadm.go:310] 
	I1001 20:34:45.592858   68418 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token p1d60n.4sgx895mi22cjpsf \
	I1001 20:34:45.592982   68418 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 
	I1001 20:34:45.592997   68418 cni.go:84] Creating CNI manager for ""
	I1001 20:34:45.593009   68418 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:34:45.594419   68418 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 20:34:45.595548   68418 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 20:34:45.607351   68418 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
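The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not shown in this log, so the snippet below is purely illustrative: a generic bridge + host-local CNI conflist of the kind such a file typically contains. The exact fields, plugin list, and the 10.244.0.0/16 subnet are assumptions, not the file minikube actually wrote here:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF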
	I1001 20:34:45.627315   68418 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 20:34:45.627399   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:45.627424   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-878552 minikube.k8s.io/updated_at=2024_10_01T20_34_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=default-k8s-diff-port-878552 minikube.k8s.io/primary=true
	I1001 20:34:45.843925   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:45.843977   68418 ops.go:34] apiserver oom_adj: -16
	I1001 20:34:46.344009   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:46.844786   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:47.344138   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:47.844582   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:48.344478   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:48.844802   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:49.344790   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:49.844113   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:49.980078   68418 kubeadm.go:1113] duration metric: took 4.352743528s to wait for elevateKubeSystemPrivileges
	I1001 20:34:49.980127   68418 kubeadm.go:394] duration metric: took 4m59.934297539s to StartCluster
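The elevateKubeSystemPrivileges step above repeatedly polls `kubectl get sa default` until the `minikube-rbac` ClusterRoleBinding created earlier takes effect. Two read-only checks that confirm the same thing from outside the VM (the kubectl context name is assumed to match the profile name):

    kubectl --context default-k8s-diff-port-878552 get clusterrolebinding minikube-rbac -o wide
    # Should answer "yes" once kube-system:default is bound to cluster-admin
    kubectl --context default-k8s-diff-port-878552 auth can-i '*' '*' \
      --as=system:serviceaccount:kube-system:default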
	I1001 20:34:49.980151   68418 settings.go:142] acquiring lock: {Name:mkeb3fe64ed992373f048aef24eb0d675bcab60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:34:49.980237   68418 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:34:49.982156   68418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/kubeconfig: {Name:mk7ef19d546928001abf2478a3abe1d17765a591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:34:49.982450   68418 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.4 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 20:34:49.982531   68418 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 20:34:49.982651   68418 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-878552"
	I1001 20:34:49.982674   68418 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-878552"
	I1001 20:34:49.982673   68418 config.go:182] Loaded profile config "default-k8s-diff-port-878552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W1001 20:34:49.982682   68418 addons.go:243] addon storage-provisioner should already be in state true
	I1001 20:34:49.982722   68418 host.go:66] Checking if "default-k8s-diff-port-878552" exists ...
	I1001 20:34:49.982727   68418 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-878552"
	I1001 20:34:49.982743   68418 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-878552"
	I1001 20:34:49.982817   68418 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-878552"
	I1001 20:34:49.982861   68418 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-878552"
	W1001 20:34:49.982871   68418 addons.go:243] addon metrics-server should already be in state true
	I1001 20:34:49.982899   68418 host.go:66] Checking if "default-k8s-diff-port-878552" exists ...
	I1001 20:34:49.983158   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:34:49.983157   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:34:49.983202   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:34:49.983222   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:34:49.983301   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:34:49.983360   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:34:49.983825   68418 out.go:177] * Verifying Kubernetes components...
	I1001 20:34:49.985618   68418 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:34:50.000925   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33641
	I1001 20:34:50.001031   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40311
	I1001 20:34:50.001469   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:34:50.001518   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:34:50.002031   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:34:50.002046   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:34:50.002084   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:34:50.002096   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:34:50.002510   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:34:50.002698   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:34:50.003148   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:34:50.003188   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:34:50.003432   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33083
	I1001 20:34:50.003813   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:34:50.003845   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:34:50.003858   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:34:50.004438   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:34:50.004462   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:34:50.004823   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:34:50.005017   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetState
	I1001 20:34:50.009397   68418 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-878552"
	W1001 20:34:50.009420   68418 addons.go:243] addon default-storageclass should already be in state true
	I1001 20:34:50.009449   68418 host.go:66] Checking if "default-k8s-diff-port-878552" exists ...
	I1001 20:34:50.009886   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:34:50.009937   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:34:50.025234   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42543
	I1001 20:34:50.025892   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:34:50.026556   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:34:50.026583   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:34:50.027217   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:34:50.027484   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetState
	I1001 20:34:50.029351   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42743
	I1001 20:34:50.029576   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:34:50.029996   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:34:50.030498   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:34:50.030520   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:34:50.030634   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45211
	I1001 20:34:50.030843   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:34:50.031078   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetState
	I1001 20:34:50.031171   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:34:50.031283   68418 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 20:34:50.031683   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:34:50.031706   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:34:50.032061   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:34:50.032524   68418 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:34:50.032542   68418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 20:34:50.032560   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:34:50.032650   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:34:50.032683   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:34:50.033489   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:34:50.034928   68418 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1001 20:34:50.036629   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:34:50.036714   68418 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1001 20:34:50.036728   68418 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1001 20:34:50.036757   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:34:50.037000   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:34:50.037020   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:34:50.037303   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:34:50.037502   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:34:50.037697   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:34:50.037858   68418 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa Username:docker}
	I1001 20:34:50.040023   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:34:50.040406   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:34:50.040428   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:34:50.040637   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:34:50.040843   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:34:50.041031   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:34:50.041156   68418 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa Username:docker}
	I1001 20:34:50.050069   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34785
	I1001 20:34:50.050601   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:34:50.051079   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:34:50.051098   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:34:50.051460   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:34:50.051601   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetState
	I1001 20:34:50.054072   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:34:50.054308   68418 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 20:34:50.054324   68418 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 20:34:50.054344   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:34:50.057697   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:34:50.058329   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:34:50.058386   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:34:50.058519   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:34:50.058781   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:34:50.059047   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:34:50.059192   68418 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa Username:docker}
	I1001 20:34:50.228332   68418 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:34:50.245991   68418 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-878552" to be "Ready" ...
	I1001 20:34:50.255784   68418 node_ready.go:49] node "default-k8s-diff-port-878552" has status "Ready":"True"
	I1001 20:34:50.255822   68418 node_ready.go:38] duration metric: took 9.789404ms for node "default-k8s-diff-port-878552" to be "Ready" ...
	I1001 20:34:50.255836   68418 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:34:50.262258   68418 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8xth8" in "kube-system" namespace to be "Ready" ...
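The node_ready and pod_ready waits that start here are roughly equivalent to the following kubectl commands, using the node name and kube-dns label from the log (context name again assumed to match the profile):

    kubectl --context default-k8s-diff-port-878552 wait --for=condition=Ready \
      node/default-k8s-diff-port-878552 --timeout=6m
    kubectl --context default-k8s-diff-port-878552 -n kube-system wait --for=condition=Ready \
      pod -l k8s-app=kube-dns --timeout=6m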
	I1001 20:34:50.409170   68418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 20:34:50.412846   68418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:34:50.423375   68418 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1001 20:34:50.423404   68418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1001 20:34:50.476160   68418 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1001 20:34:50.476192   68418 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1001 20:34:50.510810   68418 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 20:34:50.510840   68418 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1001 20:34:50.570025   68418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 20:34:50.783367   68418 main.go:141] libmachine: Making call to close driver server
	I1001 20:34:50.783390   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Close
	I1001 20:34:50.783748   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | Closing plugin on server side
	I1001 20:34:50.783761   68418 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:34:50.783773   68418 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:34:50.783786   68418 main.go:141] libmachine: Making call to close driver server
	I1001 20:34:50.783794   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Close
	I1001 20:34:50.783980   68418 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:34:50.783993   68418 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:34:50.783999   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | Closing plugin on server side
	I1001 20:34:50.795782   68418 main.go:141] libmachine: Making call to close driver server
	I1001 20:34:50.795802   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Close
	I1001 20:34:50.796093   68418 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:34:50.796114   68418 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:34:51.424974   68418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.012087585s)
	I1001 20:34:51.425090   68418 main.go:141] libmachine: Making call to close driver server
	I1001 20:34:51.425107   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Close
	I1001 20:34:51.425376   68418 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:34:51.425413   68418 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:34:51.425426   68418 main.go:141] libmachine: Making call to close driver server
	I1001 20:34:51.425440   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Close
	I1001 20:34:51.425671   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | Closing plugin on server side
	I1001 20:34:51.425723   68418 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:34:51.425743   68418 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:34:51.713898   68418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.143834875s)
	I1001 20:34:51.713954   68418 main.go:141] libmachine: Making call to close driver server
	I1001 20:34:51.713969   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Close
	I1001 20:34:51.714336   68418 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:34:51.714375   68418 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:34:51.714380   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | Closing plugin on server side
	I1001 20:34:51.714385   68418 main.go:141] libmachine: Making call to close driver server
	I1001 20:34:51.714487   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Close
	I1001 20:34:51.714762   68418 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:34:51.714779   68418 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:34:51.714798   68418 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-878552"
	I1001 20:34:51.716414   68418 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1001 20:34:51.717866   68418 addons.go:510] duration metric: took 1.735348103s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
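With default-storageclass, storage-provisioner, and metrics-server enabled, the checks below would confirm each addon is actually serving. The deployment and APIService names are the ones minikube's addon manifests normally use, so treat them as assumptions; `kubectl top nodes` only succeeds once metrics-server is scraping kubelets:

    kubectl --context default-k8s-diff-port-878552 -n kube-system get deploy metrics-server
    kubectl --context default-k8s-diff-port-878552 get apiservice v1beta1.metrics.k8s.io
    kubectl --context default-k8s-diff-port-878552 get storageclass
    kubectl --context default-k8s-diff-port-878552 top nodes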
	I1001 20:34:52.268955   68418 pod_ready.go:103] pod "coredns-7c65d6cfc9-8xth8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:34:54.769610   68418 pod_ready.go:93] pod "coredns-7c65d6cfc9-8xth8" in "kube-system" namespace has status "Ready":"True"
	I1001 20:34:54.769633   68418 pod_ready.go:82] duration metric: took 4.507339793s for pod "coredns-7c65d6cfc9-8xth8" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:54.769642   68418 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-p7wbg" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:56.775610   68418 pod_ready.go:103] pod "coredns-7c65d6cfc9-p7wbg" in "kube-system" namespace has status "Ready":"False"
	I1001 20:34:57.777422   68418 pod_ready.go:93] pod "coredns-7c65d6cfc9-p7wbg" in "kube-system" namespace has status "Ready":"True"
	I1001 20:34:57.777445   68418 pod_ready.go:82] duration metric: took 3.007796462s for pod "coredns-7c65d6cfc9-p7wbg" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.777455   68418 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.783103   68418 pod_ready.go:93] pod "etcd-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:34:57.783124   68418 pod_ready.go:82] duration metric: took 5.664052ms for pod "etcd-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.783135   68418 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.788028   68418 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:34:57.788052   68418 pod_ready.go:82] duration metric: took 4.910566ms for pod "kube-apiserver-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.788064   68418 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.792321   68418 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:34:57.792348   68418 pod_ready.go:82] duration metric: took 4.274793ms for pod "kube-controller-manager-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.792379   68418 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-272ln" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.797759   68418 pod_ready.go:93] pod "kube-proxy-272ln" in "kube-system" namespace has status "Ready":"True"
	I1001 20:34:57.797782   68418 pod_ready.go:82] duration metric: took 5.395909ms for pod "kube-proxy-272ln" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.797792   68418 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:58.173750   68418 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:34:58.173783   68418 pod_ready.go:82] duration metric: took 375.98387ms for pod "kube-scheduler-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:58.173793   68418 pod_ready.go:39] duration metric: took 7.917945016s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:34:58.173812   68418 api_server.go:52] waiting for apiserver process to appear ...
	I1001 20:34:58.173878   68418 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:34:58.188649   68418 api_server.go:72] duration metric: took 8.206165908s to wait for apiserver process to appear ...
	I1001 20:34:58.188676   68418 api_server.go:88] waiting for apiserver healthz status ...
	I1001 20:34:58.188697   68418 api_server.go:253] Checking apiserver healthz at https://192.168.50.4:8444/healthz ...
	I1001 20:34:58.193752   68418 api_server.go:279] https://192.168.50.4:8444/healthz returned 200:
	ok
	I1001 20:34:58.194629   68418 api_server.go:141] control plane version: v1.31.1
	I1001 20:34:58.194646   68418 api_server.go:131] duration metric: took 5.963942ms to wait for apiserver health ...
	I1001 20:34:58.194653   68418 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 20:34:58.378081   68418 system_pods.go:59] 9 kube-system pods found
	I1001 20:34:58.378110   68418 system_pods.go:61] "coredns-7c65d6cfc9-8xth8" [4a6d614d-f16c-46fb-add5-610ac5895e1c] Running
	I1001 20:34:58.378115   68418 system_pods.go:61] "coredns-7c65d6cfc9-p7wbg" [13fab587-7dc4-41fc-a74c-47372725886d] Running
	I1001 20:34:58.378121   68418 system_pods.go:61] "etcd-default-k8s-diff-port-878552" [56a25509-d233-470d-888a-cf87475bf51b] Running
	I1001 20:34:58.378124   68418 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-878552" [d74bbc5a-6944-4e7b-a175-59b8ce58b359] Running
	I1001 20:34:58.378128   68418 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-878552" [5f2b8294-3146-4996-8a92-69ae08803d55] Running
	I1001 20:34:58.378131   68418 system_pods.go:61] "kube-proxy-272ln" [9f2e367f-34c7-4117-bd8e-62b5aa58c7b5] Running
	I1001 20:34:58.378134   68418 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-878552" [91e886e5-8452-4fe2-8be8-7705eeed5073] Running
	I1001 20:34:58.378140   68418 system_pods.go:61] "metrics-server-6867b74b74-75m4s" [c8eb4eb3-ea7f-4a88-8c1f-e3331f44ae53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:34:58.378143   68418 system_pods.go:61] "storage-provisioner" [bfc9ed28-f04b-4e57-b8c0-f41849e1fc25] Running
	I1001 20:34:58.378151   68418 system_pods.go:74] duration metric: took 183.491966ms to wait for pod list to return data ...
	I1001 20:34:58.378157   68418 default_sa.go:34] waiting for default service account to be created ...
	I1001 20:34:58.574257   68418 default_sa.go:45] found service account: "default"
	I1001 20:34:58.574282   68418 default_sa.go:55] duration metric: took 196.119399ms for default service account to be created ...
	I1001 20:34:58.574290   68418 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 20:34:58.776341   68418 system_pods.go:86] 9 kube-system pods found
	I1001 20:34:58.776395   68418 system_pods.go:89] "coredns-7c65d6cfc9-8xth8" [4a6d614d-f16c-46fb-add5-610ac5895e1c] Running
	I1001 20:34:58.776406   68418 system_pods.go:89] "coredns-7c65d6cfc9-p7wbg" [13fab587-7dc4-41fc-a74c-47372725886d] Running
	I1001 20:34:58.776420   68418 system_pods.go:89] "etcd-default-k8s-diff-port-878552" [56a25509-d233-470d-888a-cf87475bf51b] Running
	I1001 20:34:58.776428   68418 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-878552" [d74bbc5a-6944-4e7b-a175-59b8ce58b359] Running
	I1001 20:34:58.776438   68418 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-878552" [5f2b8294-3146-4996-8a92-69ae08803d55] Running
	I1001 20:34:58.776443   68418 system_pods.go:89] "kube-proxy-272ln" [9f2e367f-34c7-4117-bd8e-62b5aa58c7b5] Running
	I1001 20:34:58.776449   68418 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-878552" [91e886e5-8452-4fe2-8be8-7705eeed5073] Running
	I1001 20:34:58.776456   68418 system_pods.go:89] "metrics-server-6867b74b74-75m4s" [c8eb4eb3-ea7f-4a88-8c1f-e3331f44ae53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:34:58.776463   68418 system_pods.go:89] "storage-provisioner" [bfc9ed28-f04b-4e57-b8c0-f41849e1fc25] Running
	I1001 20:34:58.776471   68418 system_pods.go:126] duration metric: took 202.174994ms to wait for k8s-apps to be running ...
	I1001 20:34:58.776481   68418 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 20:34:58.776526   68418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:34:58.791729   68418 system_svc.go:56] duration metric: took 15.241394ms WaitForService to wait for kubelet
	I1001 20:34:58.791758   68418 kubeadm.go:582] duration metric: took 8.809278003s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:34:58.791774   68418 node_conditions.go:102] verifying NodePressure condition ...
	I1001 20:34:58.976076   68418 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 20:34:58.976102   68418 node_conditions.go:123] node cpu capacity is 2
	I1001 20:34:58.976115   68418 node_conditions.go:105] duration metric: took 184.336121ms to run NodePressure ...
	I1001 20:34:58.976127   68418 start.go:241] waiting for startup goroutines ...
	I1001 20:34:58.976136   68418 start.go:246] waiting for cluster config update ...
	I1001 20:34:58.976149   68418 start.go:255] writing updated cluster config ...
	I1001 20:34:58.976450   68418 ssh_runner.go:195] Run: rm -f paused
	I1001 20:34:59.026367   68418 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 20:34:59.029055   68418 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-878552" cluster and "default" namespace by default
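
	The wait sequence recorded above (pod_ready.go waiting on the control-plane pods, then api_server.go polling https://192.168.50.4:8444/healthz until it "returned 200: ok") is a plain poll-until-healthy loop. The block below is a minimal illustrative Go sketch of that pattern only; it is not minikube's implementation, and the URL, timeout, retry interval, and TLS handling are assumptions lifted from the log for the example.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver in this setup serves a self-signed certificate, so the
			// sketch skips verification; real code would trust the cluster CA instead.
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				healthy := resp.StatusCode == http.StatusOK
				resp.Body.Close()
				if healthy {
					return nil // corresponds to the "returned 200: ok" line in the log
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("apiserver at %s did not report healthy within %s", url, timeout)
	}

	func main() {
		// URL and port taken from the api_server.go log lines above; illustrative only.
		if err := waitForHealthz("https://192.168.50.4:8444/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("healthz ok")
	}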
	
	
	==> CRI-O <==
	Oct 01 20:35:29 embed-certs-106982 crio[716]: time="2024-10-01 20:35:29.904535933Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814929904388412,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4242f025-98c2-499e-bfb0-ede59df3ee00 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:35:29 embed-certs-106982 crio[716]: time="2024-10-01 20:35:29.905399091Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7bfee705-bf91-4a1c-a13a-9b70e2bce3a4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:35:29 embed-certs-106982 crio[716]: time="2024-10-01 20:35:29.905500581Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7bfee705-bf91-4a1c-a13a-9b70e2bce3a4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:35:29 embed-certs-106982 crio[716]: time="2024-10-01 20:35:29.905816622Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:49956e29a325cfda62a0b0ddf30ac17398312cc9fbef9933a45a20dd90e9d7f4,PodSandboxId:e7bd7a99780ccbbee9f2f3eadc66d382e572973514dcbb35c1d84129b78e4764,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727814382711276531,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aaab1f2-8361-46c6-88be-ed9004628715,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed6a9cbaccaf595eaf1508b7e27e572fc3d6bb42981beec1a8ba77ddc80490e,PodSandboxId:bc15115378909caaa1b9f904887679d07d8298120f67307b523c1559feafb4de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814381604651449,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5ms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 652fcc3d-ae12-4e11-b212-8891c1c05701,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bca7dc0957012fce7669cf95da2da702d227ffbdd5ed0171872b58719c908e8d,PodSandboxId:d7fd735c09a752b6ed7dd40c2af00729c730e4363260e62626e05dc9d5ae7c23,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814381450670970,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wfdwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
174cd48-6855-4813-9ecd-3b3a82386720,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b21a0cbb3a52290e36564c48071015faf99a296f89f200bcfa148a3c95d76ca,PodSandboxId:1bddc641693b85ab307065a31ca507e1e70676cfc4d85b42faaa6ebb70db7376,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727814381081947263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fjnvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 728b1b90-5961-45e9-9818-8fc6f6db1634,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a99aae7d75e2fa9c02da40f811e88ccdfb98330c078417c46dda7892065ec0,PodSandboxId:d59aaf738c583d020c40193e07e23efc8334d9ec12fb24378780b4bc1a11f9e5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727814370131874920,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20c73fd156c9e4f64240f6fa41d9888d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b71bb11a38d4d3cee348c612978d120ef0b43650039509e95f793c5c224aab74,PodSandboxId:352152c2449c88805800055ddf9aa37ab049449f10b9842b68bf647ec87d184c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727814370109737135,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a509b4a275e96f7e1fb9a5675e98f42,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49dc9b41de775e3af100f0fba4f1a15993b61c34b27af01f776f85419b41a10e,PodSandboxId:16096076c09d5cc2c26167d746eb591295f0cfc58d72654c3f49fcdd317ac88d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727814370087503165,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd4bc15ab11faab9c227d13239baa161,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58bdf5137deb5d4626b9591e8a3ecd2e91299386f1e65875d27005f5c7848a16,PodSandboxId:62be514e87c68085f9432f46d952a9af9d16e56a50a769cea308ec4f39d0fb00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727814370034180117,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a52129df49edfb54a3732fda1a5b47c2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc8e0b8fa08173938da2c5f8e2005d704509fdce496896cf1760962e4b7c749,PodSandboxId:0c60aee3aed0249253019fd569881f45bff179141c40c212cf45ba441f80acfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727814083077444753,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd4bc15ab11faab9c227d13239baa161,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7bfee705-bf91-4a1c-a13a-9b70e2bce3a4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:35:29 embed-certs-106982 crio[716]: time="2024-10-01 20:35:29.942938794Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0e7350f6-8237-4f7f-ac51-1d64ba0a4134 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:35:29 embed-certs-106982 crio[716]: time="2024-10-01 20:35:29.943076289Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0e7350f6-8237-4f7f-ac51-1d64ba0a4134 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:35:29 embed-certs-106982 crio[716]: time="2024-10-01 20:35:29.944224698Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c4277e03-eac0-41e8-85b0-8852400063fa name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:35:29 embed-certs-106982 crio[716]: time="2024-10-01 20:35:29.944626539Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814929944603199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c4277e03-eac0-41e8-85b0-8852400063fa name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:35:29 embed-certs-106982 crio[716]: time="2024-10-01 20:35:29.945251334Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ff325ef4-e826-4685-96ce-765396da0db9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:35:29 embed-certs-106982 crio[716]: time="2024-10-01 20:35:29.945319240Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ff325ef4-e826-4685-96ce-765396da0db9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:35:29 embed-certs-106982 crio[716]: time="2024-10-01 20:35:29.945509639Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:49956e29a325cfda62a0b0ddf30ac17398312cc9fbef9933a45a20dd90e9d7f4,PodSandboxId:e7bd7a99780ccbbee9f2f3eadc66d382e572973514dcbb35c1d84129b78e4764,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727814382711276531,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aaab1f2-8361-46c6-88be-ed9004628715,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed6a9cbaccaf595eaf1508b7e27e572fc3d6bb42981beec1a8ba77ddc80490e,PodSandboxId:bc15115378909caaa1b9f904887679d07d8298120f67307b523c1559feafb4de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814381604651449,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5ms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 652fcc3d-ae12-4e11-b212-8891c1c05701,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bca7dc0957012fce7669cf95da2da702d227ffbdd5ed0171872b58719c908e8d,PodSandboxId:d7fd735c09a752b6ed7dd40c2af00729c730e4363260e62626e05dc9d5ae7c23,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814381450670970,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wfdwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
174cd48-6855-4813-9ecd-3b3a82386720,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b21a0cbb3a52290e36564c48071015faf99a296f89f200bcfa148a3c95d76ca,PodSandboxId:1bddc641693b85ab307065a31ca507e1e70676cfc4d85b42faaa6ebb70db7376,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727814381081947263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fjnvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 728b1b90-5961-45e9-9818-8fc6f6db1634,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a99aae7d75e2fa9c02da40f811e88ccdfb98330c078417c46dda7892065ec0,PodSandboxId:d59aaf738c583d020c40193e07e23efc8334d9ec12fb24378780b4bc1a11f9e5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727814370131874920,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20c73fd156c9e4f64240f6fa41d9888d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b71bb11a38d4d3cee348c612978d120ef0b43650039509e95f793c5c224aab74,PodSandboxId:352152c2449c88805800055ddf9aa37ab049449f10b9842b68bf647ec87d184c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727814370109737135,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a509b4a275e96f7e1fb9a5675e98f42,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49dc9b41de775e3af100f0fba4f1a15993b61c34b27af01f776f85419b41a10e,PodSandboxId:16096076c09d5cc2c26167d746eb591295f0cfc58d72654c3f49fcdd317ac88d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727814370087503165,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd4bc15ab11faab9c227d13239baa161,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58bdf5137deb5d4626b9591e8a3ecd2e91299386f1e65875d27005f5c7848a16,PodSandboxId:62be514e87c68085f9432f46d952a9af9d16e56a50a769cea308ec4f39d0fb00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727814370034180117,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a52129df49edfb54a3732fda1a5b47c2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc8e0b8fa08173938da2c5f8e2005d704509fdce496896cf1760962e4b7c749,PodSandboxId:0c60aee3aed0249253019fd569881f45bff179141c40c212cf45ba441f80acfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727814083077444753,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd4bc15ab11faab9c227d13239baa161,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ff325ef4-e826-4685-96ce-765396da0db9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:35:29 embed-certs-106982 crio[716]: time="2024-10-01 20:35:29.982134545Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=055121f5-75e4-4da2-a5f3-de26d9b90b3e name=/runtime.v1.RuntimeService/Version
	Oct 01 20:35:29 embed-certs-106982 crio[716]: time="2024-10-01 20:35:29.982248455Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=055121f5-75e4-4da2-a5f3-de26d9b90b3e name=/runtime.v1.RuntimeService/Version
	Oct 01 20:35:29 embed-certs-106982 crio[716]: time="2024-10-01 20:35:29.983301710Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a93a480b-b98a-4200-9b12-147c0366dc38 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:35:29 embed-certs-106982 crio[716]: time="2024-10-01 20:35:29.983727898Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814929983705499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a93a480b-b98a-4200-9b12-147c0366dc38 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:35:29 embed-certs-106982 crio[716]: time="2024-10-01 20:35:29.984298753Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=46fc9e65-bd21-40cc-9743-0f61c056cecb name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:35:29 embed-certs-106982 crio[716]: time="2024-10-01 20:35:29.984378705Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=46fc9e65-bd21-40cc-9743-0f61c056cecb name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:35:29 embed-certs-106982 crio[716]: time="2024-10-01 20:35:29.984572479Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:49956e29a325cfda62a0b0ddf30ac17398312cc9fbef9933a45a20dd90e9d7f4,PodSandboxId:e7bd7a99780ccbbee9f2f3eadc66d382e572973514dcbb35c1d84129b78e4764,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727814382711276531,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aaab1f2-8361-46c6-88be-ed9004628715,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed6a9cbaccaf595eaf1508b7e27e572fc3d6bb42981beec1a8ba77ddc80490e,PodSandboxId:bc15115378909caaa1b9f904887679d07d8298120f67307b523c1559feafb4de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814381604651449,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5ms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 652fcc3d-ae12-4e11-b212-8891c1c05701,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bca7dc0957012fce7669cf95da2da702d227ffbdd5ed0171872b58719c908e8d,PodSandboxId:d7fd735c09a752b6ed7dd40c2af00729c730e4363260e62626e05dc9d5ae7c23,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814381450670970,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wfdwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
174cd48-6855-4813-9ecd-3b3a82386720,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b21a0cbb3a52290e36564c48071015faf99a296f89f200bcfa148a3c95d76ca,PodSandboxId:1bddc641693b85ab307065a31ca507e1e70676cfc4d85b42faaa6ebb70db7376,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727814381081947263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fjnvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 728b1b90-5961-45e9-9818-8fc6f6db1634,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a99aae7d75e2fa9c02da40f811e88ccdfb98330c078417c46dda7892065ec0,PodSandboxId:d59aaf738c583d020c40193e07e23efc8334d9ec12fb24378780b4bc1a11f9e5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727814370131874920,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20c73fd156c9e4f64240f6fa41d9888d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b71bb11a38d4d3cee348c612978d120ef0b43650039509e95f793c5c224aab74,PodSandboxId:352152c2449c88805800055ddf9aa37ab049449f10b9842b68bf647ec87d184c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727814370109737135,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a509b4a275e96f7e1fb9a5675e98f42,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49dc9b41de775e3af100f0fba4f1a15993b61c34b27af01f776f85419b41a10e,PodSandboxId:16096076c09d5cc2c26167d746eb591295f0cfc58d72654c3f49fcdd317ac88d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727814370087503165,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd4bc15ab11faab9c227d13239baa161,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58bdf5137deb5d4626b9591e8a3ecd2e91299386f1e65875d27005f5c7848a16,PodSandboxId:62be514e87c68085f9432f46d952a9af9d16e56a50a769cea308ec4f39d0fb00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727814370034180117,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a52129df49edfb54a3732fda1a5b47c2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc8e0b8fa08173938da2c5f8e2005d704509fdce496896cf1760962e4b7c749,PodSandboxId:0c60aee3aed0249253019fd569881f45bff179141c40c212cf45ba441f80acfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727814083077444753,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd4bc15ab11faab9c227d13239baa161,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=46fc9e65-bd21-40cc-9743-0f61c056cecb name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:35:30 embed-certs-106982 crio[716]: time="2024-10-01 20:35:30.018407205Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5a92926a-1f2e-4c97-88e6-29849cff40ec name=/runtime.v1.RuntimeService/Version
	Oct 01 20:35:30 embed-certs-106982 crio[716]: time="2024-10-01 20:35:30.018503379Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5a92926a-1f2e-4c97-88e6-29849cff40ec name=/runtime.v1.RuntimeService/Version
	Oct 01 20:35:30 embed-certs-106982 crio[716]: time="2024-10-01 20:35:30.019779599Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8568c9af-1d6a-4b29-957e-852adb2c49fd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:35:30 embed-certs-106982 crio[716]: time="2024-10-01 20:35:30.020288004Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814930020261253,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8568c9af-1d6a-4b29-957e-852adb2c49fd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:35:30 embed-certs-106982 crio[716]: time="2024-10-01 20:35:30.020842214Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f39d1b11-4e2e-4978-be2d-b901147100a2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:35:30 embed-certs-106982 crio[716]: time="2024-10-01 20:35:30.020911160Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f39d1b11-4e2e-4978-be2d-b901147100a2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:35:30 embed-certs-106982 crio[716]: time="2024-10-01 20:35:30.021389048Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:49956e29a325cfda62a0b0ddf30ac17398312cc9fbef9933a45a20dd90e9d7f4,PodSandboxId:e7bd7a99780ccbbee9f2f3eadc66d382e572973514dcbb35c1d84129b78e4764,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727814382711276531,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aaab1f2-8361-46c6-88be-ed9004628715,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed6a9cbaccaf595eaf1508b7e27e572fc3d6bb42981beec1a8ba77ddc80490e,PodSandboxId:bc15115378909caaa1b9f904887679d07d8298120f67307b523c1559feafb4de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814381604651449,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5ms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 652fcc3d-ae12-4e11-b212-8891c1c05701,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bca7dc0957012fce7669cf95da2da702d227ffbdd5ed0171872b58719c908e8d,PodSandboxId:d7fd735c09a752b6ed7dd40c2af00729c730e4363260e62626e05dc9d5ae7c23,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814381450670970,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wfdwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
174cd48-6855-4813-9ecd-3b3a82386720,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b21a0cbb3a52290e36564c48071015faf99a296f89f200bcfa148a3c95d76ca,PodSandboxId:1bddc641693b85ab307065a31ca507e1e70676cfc4d85b42faaa6ebb70db7376,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727814381081947263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fjnvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 728b1b90-5961-45e9-9818-8fc6f6db1634,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a99aae7d75e2fa9c02da40f811e88ccdfb98330c078417c46dda7892065ec0,PodSandboxId:d59aaf738c583d020c40193e07e23efc8334d9ec12fb24378780b4bc1a11f9e5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727814370131874920,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20c73fd156c9e4f64240f6fa41d9888d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b71bb11a38d4d3cee348c612978d120ef0b43650039509e95f793c5c224aab74,PodSandboxId:352152c2449c88805800055ddf9aa37ab049449f10b9842b68bf647ec87d184c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727814370109737135,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a509b4a275e96f7e1fb9a5675e98f42,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49dc9b41de775e3af100f0fba4f1a15993b61c34b27af01f776f85419b41a10e,PodSandboxId:16096076c09d5cc2c26167d746eb591295f0cfc58d72654c3f49fcdd317ac88d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727814370087503165,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd4bc15ab11faab9c227d13239baa161,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58bdf5137deb5d4626b9591e8a3ecd2e91299386f1e65875d27005f5c7848a16,PodSandboxId:62be514e87c68085f9432f46d952a9af9d16e56a50a769cea308ec4f39d0fb00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727814370034180117,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a52129df49edfb54a3732fda1a5b47c2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc8e0b8fa08173938da2c5f8e2005d704509fdce496896cf1760962e4b7c749,PodSandboxId:0c60aee3aed0249253019fd569881f45bff179141c40c212cf45ba441f80acfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727814083077444753,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd4bc15ab11faab9c227d13239baa161,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f39d1b11-4e2e-4978-be2d-b901147100a2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	49956e29a325c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   e7bd7a99780cc       storage-provisioner
	bed6a9cbaccaf       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   bc15115378909       coredns-7c65d6cfc9-rq5ms
	bca7dc0957012       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   d7fd735c09a75       coredns-7c65d6cfc9-wfdwp
	7b21a0cbb3a52       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   1bddc641693b8       kube-proxy-fjnvc
	f0a99aae7d75e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   d59aaf738c583       etcd-embed-certs-106982
	b71bb11a38d4d       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   352152c2449c8       kube-scheduler-embed-certs-106982
	49dc9b41de775       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   16096076c09d5       kube-apiserver-embed-certs-106982
	58bdf5137deb5       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   62be514e87c68       kube-controller-manager-embed-certs-106982
	bfc8e0b8fa081       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   0c60aee3aed02       kube-apiserver-embed-certs-106982
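
	For reading the table above against the CRI-O debug output: the ListContainers responses report CreatedAt as Unix nanoseconds, and the table renders those same values as relative ages. A small illustrative Go conversion (not part of the test suite), using the storage-provisioner CreatedAt value and the 20:35:29 log timestamp copied from above:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// CreatedAt copied from the storage-provisioner entry in the ListContainers response above.
		createdAt := time.Unix(0, 1727814382711276531).UTC()
		// Timestamp of the surrounding CRI-O log lines.
		logTime := time.Date(2024, time.October, 1, 20, 35, 29, 0, time.UTC)
		fmt.Println("created at:", createdAt.Format(time.RFC3339))
		// Prints roughly 9m0s, matching the table's "9 minutes ago" column.
		fmt.Println("age at log time:", logTime.Sub(createdAt).Round(time.Minute))
	}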
	
	
	==> coredns [bca7dc0957012fce7669cf95da2da702d227ffbdd5ed0171872b58719c908e8d] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [bed6a9cbaccaf595eaf1508b7e27e572fc3d6bb42981beec1a8ba77ddc80490e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-106982
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-106982
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=embed-certs-106982
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T20_26_16_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 20:26:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-106982
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 20:35:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 20:31:32 +0000   Tue, 01 Oct 2024 20:26:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 20:31:32 +0000   Tue, 01 Oct 2024 20:26:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 20:31:32 +0000   Tue, 01 Oct 2024 20:26:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 20:31:32 +0000   Tue, 01 Oct 2024 20:26:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    embed-certs-106982
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd30dfe38c9a4961913c765d396796b3
	  System UUID:                cd30dfe3-8c9a-4961-913c-765d396796b3
	  Boot ID:                    774f8b5c-9259-48db-98ed-09e0764a8164
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-rq5ms                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 coredns-7c65d6cfc9-wfdwp                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 etcd-embed-certs-106982                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m15s
	  kube-system                 kube-apiserver-embed-certs-106982             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-controller-manager-embed-certs-106982    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-proxy-fjnvc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                 kube-scheduler-embed-certs-106982             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 metrics-server-6867b74b74-z27sl               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m8s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m8s   kube-proxy       
	  Normal  Starting                 9m15s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m15s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m15s  kubelet          Node embed-certs-106982 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m15s  kubelet          Node embed-certs-106982 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m15s  kubelet          Node embed-certs-106982 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m11s  node-controller  Node embed-certs-106982 event: Registered Node embed-certs-106982 in Controller
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056448] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039315] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Oct 1 20:21] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.005356] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.347801] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.228031] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.147610] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.208860] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.162576] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.346586] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[  +4.553015] systemd-fstab-generator[797]: Ignoring "noauto" option for root device
	[  +0.069552] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.380245] systemd-fstab-generator[920]: Ignoring "noauto" option for root device
	[  +5.676723] kauditd_printk_skb: 97 callbacks suppressed
	[  +8.106833] kauditd_printk_skb: 85 callbacks suppressed
	[Oct 1 20:26] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.380165] systemd-fstab-generator[2572]: Ignoring "noauto" option for root device
	[  +4.443944] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.925941] systemd-fstab-generator[2894]: Ignoring "noauto" option for root device
	[  +5.411887] systemd-fstab-generator[3028]: Ignoring "noauto" option for root device
	[  +0.038526] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.343296] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [f0a99aae7d75e2fa9c02da40f811e88ccdfb98330c078417c46dda7892065ec0] <==
	{"level":"info","ts":"2024-10-01T20:26:10.601378Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"28dd8e6bbca035f5","initial-advertise-peer-urls":["https://192.168.39.203:2380"],"listen-peer-urls":["https://192.168.39.203:2380"],"advertise-client-urls":["https://192.168.39.203:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.203:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-01T20:26:10.603071Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-01T20:26:10.597160Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-10-01T20:26:10.603521Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-10-01T20:26:11.297091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-01T20:26:11.297206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-01T20:26:11.297252Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 received MsgPreVoteResp from 28dd8e6bbca035f5 at term 1"}
	{"level":"info","ts":"2024-10-01T20:26:11.297281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 became candidate at term 2"}
	{"level":"info","ts":"2024-10-01T20:26:11.297305Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 received MsgVoteResp from 28dd8e6bbca035f5 at term 2"}
	{"level":"info","ts":"2024-10-01T20:26:11.297332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 became leader at term 2"}
	{"level":"info","ts":"2024-10-01T20:26:11.297357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 28dd8e6bbca035f5 elected leader 28dd8e6bbca035f5 at term 2"}
	{"level":"info","ts":"2024-10-01T20:26:11.302131Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T20:26:11.305202Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"28dd8e6bbca035f5","local-member-attributes":"{Name:embed-certs-106982 ClientURLs:[https://192.168.39.203:2379]}","request-path":"/0/members/28dd8e6bbca035f5/attributes","cluster-id":"3b4a61fb6ca7242f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-01T20:26:11.305328Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T20:26:11.305640Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T20:26:11.305765Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-01T20:26:11.305790Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-01T20:26:11.306168Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3b4a61fb6ca7242f","local-member-id":"28dd8e6bbca035f5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T20:26:11.308147Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T20:26:11.308213Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T20:26:11.306433Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T20:26:11.309332Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.203:2379"}
	{"level":"info","ts":"2024-10-01T20:26:11.306878Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T20:26:11.318810Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-01T20:29:52.146102Z","caller":"traceutil/trace.go:171","msg":"trace[1832817733] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"131.567452ms","start":"2024-10-01T20:29:52.014428Z","end":"2024-10-01T20:29:52.145996Z","steps":["trace[1832817733] 'process raft request'  (duration: 131.463213ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:35:30 up 14 min,  0 users,  load average: 0.59, 0.31, 0.17
	Linux embed-certs-106982 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [49dc9b41de775e3af100f0fba4f1a15993b61c34b27af01f776f85419b41a10e] <==
	W1001 20:31:13.690179       1 handler_proxy.go:99] no RequestInfo found in the context
	E1001 20:31:13.690265       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1001 20:31:13.691489       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1001 20:31:13.691573       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1001 20:32:13.692374       1 handler_proxy.go:99] no RequestInfo found in the context
	E1001 20:32:13.692449       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1001 20:32:13.692493       1 handler_proxy.go:99] no RequestInfo found in the context
	E1001 20:32:13.692505       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1001 20:32:13.693673       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1001 20:32:13.693734       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1001 20:34:13.693858       1 handler_proxy.go:99] no RequestInfo found in the context
	E1001 20:34:13.694057       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1001 20:34:13.694097       1 handler_proxy.go:99] no RequestInfo found in the context
	E1001 20:34:13.694117       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1001 20:34:13.695288       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1001 20:34:13.695362       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [bfc8e0b8fa08173938da2c5f8e2005d704509fdce496896cf1760962e4b7c749] <==
	W1001 20:26:03.223186       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.279381       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.286972       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.286996       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.360873       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.369537       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.425926       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.428419       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.489437       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.514140       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.531560       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.578562       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.578658       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.592434       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.594899       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.618507       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.652718       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.731813       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.801757       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.810727       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.817923       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.909354       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.985372       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.988912       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:04.110814       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [58bdf5137deb5d4626b9591e8a3ecd2e91299386f1e65875d27005f5c7848a16] <==
	E1001 20:30:19.665386       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:30:20.114068       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:30:49.671861       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:30:50.125369       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:31:19.679594       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:31:20.134708       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1001 20:31:32.669551       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-106982"
	E1001 20:31:49.685856       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:31:50.142255       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:32:19.692234       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:32:20.151884       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1001 20:32:26.741262       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="342.153µs"
	I1001 20:32:41.741622       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="167.774µs"
	E1001 20:32:49.698111       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:32:50.159931       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:33:19.704770       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:33:20.168971       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:33:49.711075       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:33:50.176974       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:34:19.718312       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:34:20.185182       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:34:49.727666       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:34:50.194083       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:35:19.736223       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:35:20.201979       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [7b21a0cbb3a52290e36564c48071015faf99a296f89f200bcfa148a3c95d76ca] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 20:26:21.769105       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 20:26:21.796501       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.203"]
	E1001 20:26:21.796580       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 20:26:22.113375       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 20:26:22.113421       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 20:26:22.113449       1 server_linux.go:169] "Using iptables Proxier"
	I1001 20:26:22.150369       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 20:26:22.150669       1 server.go:483] "Version info" version="v1.31.1"
	I1001 20:26:22.150681       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 20:26:22.169837       1 config.go:199] "Starting service config controller"
	I1001 20:26:22.170004       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 20:26:22.170168       1 config.go:105] "Starting endpoint slice config controller"
	I1001 20:26:22.170175       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 20:26:22.174818       1 config.go:328] "Starting node config controller"
	I1001 20:26:22.174836       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 20:26:22.270173       1 shared_informer.go:320] Caches are synced for service config
	I1001 20:26:22.270249       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 20:26:22.275100       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b71bb11a38d4d3cee348c612978d120ef0b43650039509e95f793c5c224aab74] <==
	W1001 20:26:12.752358       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1001 20:26:12.752440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:26:12.752509       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1001 20:26:12.752545       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 20:26:12.752632       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1001 20:26:12.752672       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1001 20:26:12.752699       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E1001 20:26:12.752670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 20:26:13.580480       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1001 20:26:13.580515       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:26:13.631812       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1001 20:26:13.631936       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 20:26:13.683188       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1001 20:26:13.683233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:26:13.739263       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1001 20:26:13.739376       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1001 20:26:13.842723       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1001 20:26:13.842825       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:26:13.956364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1001 20:26:13.956564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1001 20:26:13.995541       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1001 20:26:13.995964       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:26:14.010240       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1001 20:26:14.010446       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1001 20:26:15.825099       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 01 20:34:15 embed-certs-106982 kubelet[2901]: E1001 20:34:15.942542    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814855942208904,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:34:25 embed-certs-106982 kubelet[2901]: E1001 20:34:25.944818    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814865944266001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:34:25 embed-certs-106982 kubelet[2901]: E1001 20:34:25.945249    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814865944266001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:34:28 embed-certs-106982 kubelet[2901]: E1001 20:34:28.725407    2901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-z27sl" podUID="dd1b4cdc-5ffb-4214-96c9-569ef4f7ba09"
	Oct 01 20:34:35 embed-certs-106982 kubelet[2901]: E1001 20:34:35.947205    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814875946844971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:34:35 embed-certs-106982 kubelet[2901]: E1001 20:34:35.947588    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814875946844971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:34:40 embed-certs-106982 kubelet[2901]: E1001 20:34:40.726181    2901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-z27sl" podUID="dd1b4cdc-5ffb-4214-96c9-569ef4f7ba09"
	Oct 01 20:34:45 embed-certs-106982 kubelet[2901]: E1001 20:34:45.950105    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814885949472130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:34:45 embed-certs-106982 kubelet[2901]: E1001 20:34:45.950411    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814885949472130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:34:55 embed-certs-106982 kubelet[2901]: E1001 20:34:55.726661    2901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-z27sl" podUID="dd1b4cdc-5ffb-4214-96c9-569ef4f7ba09"
	Oct 01 20:34:55 embed-certs-106982 kubelet[2901]: E1001 20:34:55.952981    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814895952599426,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:34:55 embed-certs-106982 kubelet[2901]: E1001 20:34:55.953079    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814895952599426,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:35:05 embed-certs-106982 kubelet[2901]: E1001 20:35:05.954936    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814905954440395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:35:05 embed-certs-106982 kubelet[2901]: E1001 20:35:05.955389    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814905954440395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:35:08 embed-certs-106982 kubelet[2901]: E1001 20:35:08.725615    2901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-z27sl" podUID="dd1b4cdc-5ffb-4214-96c9-569ef4f7ba09"
	Oct 01 20:35:15 embed-certs-106982 kubelet[2901]: E1001 20:35:15.756823    2901 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 01 20:35:15 embed-certs-106982 kubelet[2901]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 01 20:35:15 embed-certs-106982 kubelet[2901]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 01 20:35:15 embed-certs-106982 kubelet[2901]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 20:35:15 embed-certs-106982 kubelet[2901]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 20:35:15 embed-certs-106982 kubelet[2901]: E1001 20:35:15.958502    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814915957751215,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:35:15 embed-certs-106982 kubelet[2901]: E1001 20:35:15.958556    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814915957751215,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:35:21 embed-certs-106982 kubelet[2901]: E1001 20:35:21.726108    2901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-z27sl" podUID="dd1b4cdc-5ffb-4214-96c9-569ef4f7ba09"
	Oct 01 20:35:25 embed-certs-106982 kubelet[2901]: E1001 20:35:25.960118    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814925959583193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:35:25 embed-certs-106982 kubelet[2901]: E1001 20:35:25.960552    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814925959583193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [49956e29a325cfda62a0b0ddf30ac17398312cc9fbef9933a45a20dd90e9d7f4] <==
	I1001 20:26:22.820801       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1001 20:26:22.845574       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1001 20:26:22.845724       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1001 20:26:22.876754       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1001 20:26:22.881463       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-106982_256f5572-54ed-4aa8-89f4-d87bbab7310b!
	I1001 20:26:22.880322       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b444fdf4-8983-4279-a53d-46efe0483287", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-106982_256f5572-54ed-4aa8-89f4-d87bbab7310b became leader
	I1001 20:26:22.982503       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-106982_256f5572-54ed-4aa8-89f4-d87bbab7310b!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-106982 -n embed-certs-106982
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-106982 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-z27sl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-106982 describe pod metrics-server-6867b74b74-z27sl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-106982 describe pod metrics-server-6867b74b74-z27sl: exit status 1 (64.125167ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-z27sl" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-106982 describe pod metrics-server-6867b74b74-z27sl: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.58s)

x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.5s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1001 20:26:59.025368   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:27:57.913738   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-262337 -n no-preload-262337
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-01 20:35:47.446640048 +0000 UTC m=+6098.395443441
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-262337 -n no-preload-262337
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-262337 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-262337 logs -n 25: (1.331550083s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-402897                              | cert-expiration-402897       | jenkins | v1.34.0 | 01 Oct 24 20:12 UTC | 01 Oct 24 20:12 UTC |
	| start   | -p embed-certs-106982                                  | embed-certs-106982           | jenkins | v1.34.0 | 01 Oct 24 20:12 UTC | 01 Oct 24 20:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-262337             | no-preload-262337            | jenkins | v1.34.0 | 01 Oct 24 20:13 UTC | 01 Oct 24 20:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-262337                                   | no-preload-262337            | jenkins | v1.34.0 | 01 Oct 24 20:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-106982            | embed-certs-106982           | jenkins | v1.34.0 | 01 Oct 24 20:13 UTC | 01 Oct 24 20:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-106982                                  | embed-certs-106982           | jenkins | v1.34.0 | 01 Oct 24 20:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396    | jenkins | v1.34.0 | 01 Oct 24 20:14 UTC | 01 Oct 24 20:14 UTC |
	| start   | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396    | jenkins | v1.34.0 | 01 Oct 24 20:14 UTC | 01 Oct 24 20:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396    | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396    | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC | 01 Oct 24 20:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-359369        | old-k8s-version-359369       | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-262337                  | no-preload-262337            | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-262337                                   | no-preload-262337            | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC | 01 Oct 24 20:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-106982                 | embed-certs-106982           | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396    | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC | 01 Oct 24 20:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-556200 | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC | 01 Oct 24 20:16 UTC |
	|         | disable-driver-mounts-556200                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC | 01 Oct 24 20:21 UTC |
	|         | default-k8s-diff-port-878552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-106982                                  | embed-certs-106982           | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC | 01 Oct 24 20:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-359369                              | old-k8s-version-359369       | jenkins | v1.34.0 | 01 Oct 24 20:17 UTC | 01 Oct 24 20:17 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-359369             | old-k8s-version-359369       | jenkins | v1.34.0 | 01 Oct 24 20:17 UTC | 01 Oct 24 20:17 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-359369                              | old-k8s-version-359369       | jenkins | v1.34.0 | 01 Oct 24 20:17 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-878552  | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:22 UTC | 01 Oct 24 20:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:22 UTC |                     |
	|         | default-k8s-diff-port-878552                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-878552       | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:24 UTC | 01 Oct 24 20:34 UTC |
	|         | default-k8s-diff-port-878552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 20:24:40
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 20:24:40.832961   68418 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:24:40.833061   68418 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:24:40.833066   68418 out.go:358] Setting ErrFile to fd 2...
	I1001 20:24:40.833070   68418 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:24:40.833265   68418 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 20:24:40.833818   68418 out.go:352] Setting JSON to false
	I1001 20:24:40.834796   68418 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7623,"bootTime":1727806658,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 20:24:40.834894   68418 start.go:139] virtualization: kvm guest
	I1001 20:24:40.837148   68418 out.go:177] * [default-k8s-diff-port-878552] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 20:24:40.838511   68418 notify.go:220] Checking for updates...
	I1001 20:24:40.838551   68418 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 20:24:40.839938   68418 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 20:24:40.841161   68418 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:24:40.842268   68418 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 20:24:40.843373   68418 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 20:24:40.844538   68418 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 20:24:40.846141   68418 config.go:182] Loaded profile config "default-k8s-diff-port-878552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:24:40.846513   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:24:40.846561   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:24:40.862168   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42661
	I1001 20:24:40.862628   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:24:40.863294   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:24:40.863326   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:24:40.863699   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:24:40.863903   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:24:40.864180   68418 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 20:24:40.864548   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:24:40.864620   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:24:40.880173   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45711
	I1001 20:24:40.880719   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:24:40.881220   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:24:40.881245   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:24:40.881581   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:24:40.881795   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:24:40.920802   68418 out.go:177] * Using the kvm2 driver based on existing profile
	I1001 20:24:40.921986   68418 start.go:297] selected driver: kvm2
	I1001 20:24:40.921999   68418 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-878552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-878552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.4 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks
:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:24:40.922122   68418 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 20:24:40.922802   68418 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:24:40.922895   68418 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-11198/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 20:24:40.938386   68418 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 20:24:40.938811   68418 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:24:40.938841   68418 cni.go:84] Creating CNI manager for ""
	I1001 20:24:40.938880   68418 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:24:40.938931   68418 start.go:340] cluster config:
	{Name:default-k8s-diff-port-878552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-878552 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.4 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-h
ost Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:24:40.939036   68418 iso.go:125] acquiring lock: {Name:mk06aa0d5182c9fbfa5e40313025cbec2b4400a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:24:40.940656   68418 out.go:177] * Starting "default-k8s-diff-port-878552" primary control-plane node in "default-k8s-diff-port-878552" cluster
	I1001 20:24:40.941946   68418 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 20:24:40.942006   68418 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 20:24:40.942023   68418 cache.go:56] Caching tarball of preloaded images
	I1001 20:24:40.942155   68418 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 20:24:40.942166   68418 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 20:24:40.942298   68418 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/config.json ...
	I1001 20:24:40.942537   68418 start.go:360] acquireMachinesLock for default-k8s-diff-port-878552: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 20:24:40.942581   68418 start.go:364] duration metric: took 24.859µs to acquireMachinesLock for "default-k8s-diff-port-878552"
	I1001 20:24:40.942601   68418 start.go:96] Skipping create...Using existing machine configuration
	I1001 20:24:40.942608   68418 fix.go:54] fixHost starting: 
	I1001 20:24:40.942921   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:24:40.942954   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:24:40.958447   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36711
	I1001 20:24:40.958976   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:24:40.960190   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:24:40.960223   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:24:40.960575   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:24:40.960770   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:24:40.960921   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetState
	I1001 20:24:40.962765   68418 fix.go:112] recreateIfNeeded on default-k8s-diff-port-878552: state=Running err=<nil>
	W1001 20:24:40.962786   68418 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 20:24:40.964520   68418 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-878552" VM ...
	I1001 20:24:37.763268   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:40.262669   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:39.025570   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:39.040932   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:39.041011   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:39.076620   65592 cri.go:89] found id: ""
	I1001 20:24:39.076649   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.076659   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:39.076666   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:39.076734   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:39.113395   65592 cri.go:89] found id: ""
	I1001 20:24:39.113422   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.113430   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:39.113436   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:39.113490   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:39.147839   65592 cri.go:89] found id: ""
	I1001 20:24:39.147877   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.147890   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:39.147899   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:39.147966   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:39.179721   65592 cri.go:89] found id: ""
	I1001 20:24:39.179758   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.179769   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:39.179777   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:39.179842   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:39.211511   65592 cri.go:89] found id: ""
	I1001 20:24:39.211541   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.211549   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:39.211554   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:39.211603   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:39.243517   65592 cri.go:89] found id: ""
	I1001 20:24:39.243544   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.243552   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:39.243557   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:39.243623   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:39.276159   65592 cri.go:89] found id: ""
	I1001 20:24:39.276182   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.276189   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:39.276195   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:39.276239   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:39.307242   65592 cri.go:89] found id: ""
	I1001 20:24:39.307274   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.307285   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:39.307295   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:39.307307   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:39.387442   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:39.387486   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:39.423123   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:39.423156   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:39.474648   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:39.474686   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:39.488129   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:39.488158   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:39.557478   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:42.058114   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:42.071979   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:42.072056   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:42.110529   65592 cri.go:89] found id: ""
	I1001 20:24:42.110557   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.110565   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:42.110570   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:42.110619   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:42.145408   65592 cri.go:89] found id: ""
	I1001 20:24:42.145436   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.145445   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:42.145450   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:42.145509   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:42.180602   65592 cri.go:89] found id: ""
	I1001 20:24:42.180641   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.180655   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:42.180664   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:42.180722   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:38.119187   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:40.619080   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:40.965599   68418 machine.go:93] provisionDockerMachine start ...
	I1001 20:24:40.965619   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:24:40.965852   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:24:40.968710   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:24:40.969253   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:20:43 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:24:40.969286   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:24:40.969517   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:24:40.969724   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:24:40.969960   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:24:40.970112   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:24:40.970316   68418 main.go:141] libmachine: Using SSH client type: native
	I1001 20:24:40.970570   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1001 20:24:40.970584   68418 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 20:24:43.860755   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:24:42.262933   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:44.762857   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:42.214116   65592 cri.go:89] found id: ""
	I1001 20:24:42.214148   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.214160   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:42.214168   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:42.214224   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:42.246785   65592 cri.go:89] found id: ""
	I1001 20:24:42.246814   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.246825   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:42.246832   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:42.246900   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:42.281586   65592 cri.go:89] found id: ""
	I1001 20:24:42.281633   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.281645   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:42.281660   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:42.281724   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:42.318982   65592 cri.go:89] found id: ""
	I1001 20:24:42.319015   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.319025   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:42.319032   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:42.319085   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:42.350592   65592 cri.go:89] found id: ""
	I1001 20:24:42.350619   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.350638   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:42.350646   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:42.350659   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:42.429111   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:42.429152   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:42.466741   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:42.466775   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:42.516829   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:42.516870   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:42.530174   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:42.530201   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:42.600444   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:45.101469   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:45.113821   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:45.113904   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:45.148105   65592 cri.go:89] found id: ""
	I1001 20:24:45.148132   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.148146   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:45.148152   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:45.148196   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:45.180980   65592 cri.go:89] found id: ""
	I1001 20:24:45.181012   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.181027   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:45.181046   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:45.181113   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:45.216971   65592 cri.go:89] found id: ""
	I1001 20:24:45.217001   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.217010   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:45.217015   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:45.217060   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:45.252240   65592 cri.go:89] found id: ""
	I1001 20:24:45.252275   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.252287   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:45.252294   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:45.252354   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:45.287389   65592 cri.go:89] found id: ""
	I1001 20:24:45.287419   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.287434   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:45.287440   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:45.287501   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:45.319980   65592 cri.go:89] found id: ""
	I1001 20:24:45.320015   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.320027   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:45.320035   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:45.320101   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:45.351894   65592 cri.go:89] found id: ""
	I1001 20:24:45.351920   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.351931   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:45.351936   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:45.351984   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:45.385370   65592 cri.go:89] found id: ""
	I1001 20:24:45.385400   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.385412   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:45.385423   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:45.385485   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:45.449558   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:45.449584   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:45.449596   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:45.524322   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:45.524372   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:45.560729   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:45.560757   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:45.614098   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:45.614139   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:43.119614   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:45.121666   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:47.618362   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:46.932587   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:24:47.263384   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:49.761472   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:48.129944   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:48.143420   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:48.143496   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:48.175627   65592 cri.go:89] found id: ""
	I1001 20:24:48.175668   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.175682   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:48.175689   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:48.175747   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:48.210422   65592 cri.go:89] found id: ""
	I1001 20:24:48.210451   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.210462   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:48.210470   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:48.210535   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:48.243916   65592 cri.go:89] found id: ""
	I1001 20:24:48.243952   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.243963   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:48.243972   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:48.244027   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:48.275802   65592 cri.go:89] found id: ""
	I1001 20:24:48.275830   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.275845   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:48.275857   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:48.275917   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:48.311539   65592 cri.go:89] found id: ""
	I1001 20:24:48.311569   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.311579   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:48.311586   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:48.311648   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:48.342606   65592 cri.go:89] found id: ""
	I1001 20:24:48.342646   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.342658   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:48.342666   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:48.342718   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:48.375554   65592 cri.go:89] found id: ""
	I1001 20:24:48.375581   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.375591   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:48.375597   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:48.375642   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:48.407747   65592 cri.go:89] found id: ""
	I1001 20:24:48.407776   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.407789   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:48.407800   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:48.407814   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:48.457470   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:48.457503   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:48.470483   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:48.470517   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:48.533536   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:48.533565   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:48.533580   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:48.614530   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:48.614571   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:51.157091   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:51.170292   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:51.170364   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:51.203784   65592 cri.go:89] found id: ""
	I1001 20:24:51.203809   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.203822   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:51.203828   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:51.203917   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:51.239789   65592 cri.go:89] found id: ""
	I1001 20:24:51.239826   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.239834   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:51.239840   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:51.239889   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:51.274562   65592 cri.go:89] found id: ""
	I1001 20:24:51.274595   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.274607   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:51.274617   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:51.274701   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:51.306172   65592 cri.go:89] found id: ""
	I1001 20:24:51.306199   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.306207   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:51.306213   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:51.306269   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:51.339631   65592 cri.go:89] found id: ""
	I1001 20:24:51.339660   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.339668   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:51.339674   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:51.339725   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:51.372128   65592 cri.go:89] found id: ""
	I1001 20:24:51.372154   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.372163   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:51.372169   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:51.372223   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:51.403790   65592 cri.go:89] found id: ""
	I1001 20:24:51.403818   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.403828   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:51.403842   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:51.403890   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:51.437771   65592 cri.go:89] found id: ""
	I1001 20:24:51.437799   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.437808   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:51.437816   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:51.437827   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:51.489824   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:51.489864   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:51.503478   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:51.503508   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:51.573741   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:51.573768   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:51.573780   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:51.662355   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:51.662391   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
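(Editor's note on the cycle above: each pass runs `sudo crictl ps -a --quiet --name=<component>` for every control-plane component and treats an empty result as "No container was found matching ...". Below is a minimal, illustrative Go sketch of that probe loop, assuming a host with crictl and passwordless sudo; the helper name and error handling are invented for illustration and are not minikube's actual cri.go code.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// probeContainer mirrors the check seen in the log: list containers in any
// state whose name matches the component and report whether any ID came back.
// Illustrative sketch only; assumes crictl is on PATH and sudo needs no password.
func probeContainer(name string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return false, err
	}
	return len(strings.Fields(string(out))) > 0, nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for _, c := range components {
		found, err := probeContainer(c)
		if err != nil {
			fmt.Printf("probe %s: %v\n", c, err)
			continue
		}
		if !found {
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}

(End of note; verbatim log continues.)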
	I1001 20:24:49.618685   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:51.619186   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:53.012639   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:24:51.761853   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:53.762442   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:56.261818   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:54.199747   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:54.212731   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:54.212797   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:54.244554   65592 cri.go:89] found id: ""
	I1001 20:24:54.244586   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.244596   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:54.244602   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:54.244652   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:54.280636   65592 cri.go:89] found id: ""
	I1001 20:24:54.280667   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.280679   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:54.280686   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:54.280737   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:54.318213   65592 cri.go:89] found id: ""
	I1001 20:24:54.318246   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.318257   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:54.318265   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:54.318321   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:54.353563   65592 cri.go:89] found id: ""
	I1001 20:24:54.353595   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.353606   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:54.353615   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:54.353678   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:54.387770   65592 cri.go:89] found id: ""
	I1001 20:24:54.387795   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.387803   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:54.387809   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:54.387869   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:54.421289   65592 cri.go:89] found id: ""
	I1001 20:24:54.421317   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.421325   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:54.421332   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:54.421382   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:54.456221   65592 cri.go:89] found id: ""
	I1001 20:24:54.456261   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.456274   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:54.456282   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:54.456348   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:54.488174   65592 cri.go:89] found id: ""
	I1001 20:24:54.488208   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.488219   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:54.488228   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:54.488241   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:54.540981   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:54.541020   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:54.554099   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:54.554129   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:54.623978   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:54.624013   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:54.624034   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:54.704703   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:54.704738   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:54.119129   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:56.619282   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:56.088698   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:24:58.262173   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:00.761865   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:57.241791   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:57.254771   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:57.254843   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:57.290226   65592 cri.go:89] found id: ""
	I1001 20:24:57.290263   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.290271   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:57.290277   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:57.290336   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:57.324910   65592 cri.go:89] found id: ""
	I1001 20:24:57.324938   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.324946   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:57.324951   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:57.325068   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:57.360553   65592 cri.go:89] found id: ""
	I1001 20:24:57.360586   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.360601   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:57.360608   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:57.360669   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:57.395182   65592 cri.go:89] found id: ""
	I1001 20:24:57.395216   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.395229   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:57.395236   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:57.395296   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:57.428967   65592 cri.go:89] found id: ""
	I1001 20:24:57.428998   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.429011   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:57.429017   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:57.429072   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:57.462483   65592 cri.go:89] found id: ""
	I1001 20:24:57.462511   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.462519   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:57.462525   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:57.462581   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:57.495505   65592 cri.go:89] found id: ""
	I1001 20:24:57.495538   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.495550   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:57.495556   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:57.495615   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:57.528132   65592 cri.go:89] found id: ""
	I1001 20:24:57.528164   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.528176   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:57.528188   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:57.528203   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:57.596557   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:57.596583   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:57.596598   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:57.676797   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:57.676830   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:57.714624   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:57.714653   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:57.763801   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:57.763839   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:00.277808   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:00.291432   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:00.291489   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:00.327524   65592 cri.go:89] found id: ""
	I1001 20:25:00.327554   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.327562   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:00.327568   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:00.327618   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:00.364125   65592 cri.go:89] found id: ""
	I1001 20:25:00.364153   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.364162   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:00.364167   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:00.364229   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:00.404507   65592 cri.go:89] found id: ""
	I1001 20:25:00.404543   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.404555   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:00.404564   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:00.404770   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:00.438761   65592 cri.go:89] found id: ""
	I1001 20:25:00.438792   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.438800   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:00.438807   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:00.438862   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:00.473263   65592 cri.go:89] found id: ""
	I1001 20:25:00.473301   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.473313   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:00.473321   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:00.473391   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:00.510276   65592 cri.go:89] found id: ""
	I1001 20:25:00.510307   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.510317   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:00.510324   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:00.510383   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:00.545118   65592 cri.go:89] found id: ""
	I1001 20:25:00.545149   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.545165   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:00.545173   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:00.545229   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:00.577773   65592 cri.go:89] found id: ""
	I1001 20:25:00.577799   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.577810   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:00.577821   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:00.577835   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:00.628978   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:00.629012   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:00.642192   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:00.642225   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:00.711399   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:00.711432   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:00.711446   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:00.792477   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:00.792514   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:59.118041   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:01.119565   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:02.164636   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:05.236638   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:02.762323   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:04.764910   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:03.332492   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:03.347542   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:03.347622   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:03.388263   65592 cri.go:89] found id: ""
	I1001 20:25:03.388292   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.388300   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:03.388306   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:03.388353   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:03.421489   65592 cri.go:89] found id: ""
	I1001 20:25:03.421525   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.421534   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:03.421539   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:03.421634   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:03.457139   65592 cri.go:89] found id: ""
	I1001 20:25:03.457172   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.457182   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:03.457189   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:03.457251   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:03.497203   65592 cri.go:89] found id: ""
	I1001 20:25:03.497232   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.497241   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:03.497247   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:03.497313   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:03.535137   65592 cri.go:89] found id: ""
	I1001 20:25:03.535163   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.535171   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:03.535176   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:03.535221   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:03.569131   65592 cri.go:89] found id: ""
	I1001 20:25:03.569158   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.569166   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:03.569171   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:03.569217   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:03.605289   65592 cri.go:89] found id: ""
	I1001 20:25:03.605321   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.605329   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:03.605336   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:03.605389   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:03.651086   65592 cri.go:89] found id: ""
	I1001 20:25:03.651115   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.651123   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:03.651134   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:03.651145   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:03.731256   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:03.731281   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:03.731299   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:03.809393   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:03.809442   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:03.849171   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:03.849198   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:03.898009   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:03.898045   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:06.411962   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:06.425432   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:06.425513   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:06.463339   65592 cri.go:89] found id: ""
	I1001 20:25:06.463371   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.463383   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:06.463391   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:06.463455   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:06.502527   65592 cri.go:89] found id: ""
	I1001 20:25:06.502561   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.502569   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:06.502611   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:06.502687   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:06.547428   65592 cri.go:89] found id: ""
	I1001 20:25:06.547465   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.547474   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:06.547480   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:06.547539   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:06.581672   65592 cri.go:89] found id: ""
	I1001 20:25:06.581699   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.581708   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:06.581713   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:06.581769   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:06.615391   65592 cri.go:89] found id: ""
	I1001 20:25:06.615436   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.615449   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:06.615457   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:06.615525   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:06.651019   65592 cri.go:89] found id: ""
	I1001 20:25:06.651050   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.651060   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:06.651067   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:06.651142   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:06.687887   65592 cri.go:89] found id: ""
	I1001 20:25:06.687912   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.687922   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:06.687929   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:06.687982   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:06.729234   65592 cri.go:89] found id: ""
	I1001 20:25:06.729263   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.729273   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:06.729282   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:06.729296   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:06.747295   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:06.747326   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:06.816480   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:06.816511   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:06.816524   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:06.896918   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:06.896957   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:06.938922   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:06.938958   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
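(Editor's note on the log gatherers above: each "Gathering logs for ..." step is a single shell one-liner run over SSH, and the "container status" gatherer uses the fallback `sudo \`which crictl || echo crictl\` ps -a || sudo docker ps -a` so it still returns something when crictl is absent. The Go sketch below runs the same one-liners locally via /bin/bash -c; the remote ssh_runner plumbing is omitted, and the commands are copied from the log, so this is illustrative rather than minikube's implementation.)

package main

import (
	"fmt"
	"os/exec"
)

// runGatherer executes one of the log-gathering one-liners seen above through
// /bin/bash -c and prints its combined output, mirroring how each gatherer in
// the log is a single shell command whose output is captured.
func runGatherer(name, script string) {
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	if err != nil {
		fmt.Printf("%s: %v\n", name, err)
	}
	fmt.Printf("=== %s ===\n%s\n", name, out)
}

func main() {
	// Commands copied from the log; assumes journalctl, dmesg and passwordless sudo.
	runGatherer("kubelet", `sudo journalctl -u kubelet -n 400`)
	runGatherer("dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
	runGatherer("CRI-O", `sudo journalctl -u crio -n 400`)
	runGatherer("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}

(End of note; verbatim log continues.)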
	I1001 20:25:03.619205   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:06.118575   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:06.765214   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:09.261806   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:11.262162   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:09.494252   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:09.508085   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:09.508171   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:09.542999   65592 cri.go:89] found id: ""
	I1001 20:25:09.543029   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.543037   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:09.543043   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:09.543100   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:09.578112   65592 cri.go:89] found id: ""
	I1001 20:25:09.578137   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.578145   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:09.578150   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:09.578199   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:09.613123   65592 cri.go:89] found id: ""
	I1001 20:25:09.613150   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.613158   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:09.613166   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:09.613223   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:09.648172   65592 cri.go:89] found id: ""
	I1001 20:25:09.648214   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.648223   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:09.648230   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:09.648302   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:09.681217   65592 cri.go:89] found id: ""
	I1001 20:25:09.681244   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.681254   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:09.681261   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:09.681320   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:09.718166   65592 cri.go:89] found id: ""
	I1001 20:25:09.718196   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.718204   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:09.718212   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:09.718272   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:09.751910   65592 cri.go:89] found id: ""
	I1001 20:25:09.751942   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.751951   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:09.751956   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:09.752004   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:09.789213   65592 cri.go:89] found id: ""
	I1001 20:25:09.789237   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.789246   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:09.789254   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:09.789265   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:09.826746   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:09.826780   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:09.879079   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:09.879123   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:09.892480   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:09.892507   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:09.967048   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:09.967084   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:09.967103   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:08.118822   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:10.120018   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:12.620582   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:14.356624   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:13.262286   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:15.263349   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:12.545057   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:12.557888   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:12.557969   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:12.594881   65592 cri.go:89] found id: ""
	I1001 20:25:12.594928   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.594942   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:12.594952   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:12.595021   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:12.631393   65592 cri.go:89] found id: ""
	I1001 20:25:12.631425   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.631437   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:12.631445   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:12.631504   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:12.666442   65592 cri.go:89] found id: ""
	I1001 20:25:12.666476   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.666486   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:12.666493   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:12.666548   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:12.703321   65592 cri.go:89] found id: ""
	I1001 20:25:12.703359   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.703371   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:12.703379   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:12.703444   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:12.742188   65592 cri.go:89] found id: ""
	I1001 20:25:12.742216   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.742224   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:12.742230   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:12.742276   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:12.781829   65592 cri.go:89] found id: ""
	I1001 20:25:12.781859   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.781869   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:12.781876   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:12.781940   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:12.815368   65592 cri.go:89] found id: ""
	I1001 20:25:12.815397   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.815405   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:12.815411   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:12.815463   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:12.850913   65592 cri.go:89] found id: ""
	I1001 20:25:12.850941   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.850949   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:12.850958   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:12.850968   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:12.901409   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:12.901443   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:12.914517   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:12.914567   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:12.980086   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:12.980119   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:12.980135   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:13.055950   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:13.055989   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:15.595692   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:15.609648   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:15.609728   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:15.645477   65592 cri.go:89] found id: ""
	I1001 20:25:15.645502   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.645510   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:15.645514   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:15.645558   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:15.679674   65592 cri.go:89] found id: ""
	I1001 20:25:15.679702   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.679711   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:15.679717   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:15.679774   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:15.718057   65592 cri.go:89] found id: ""
	I1001 20:25:15.718082   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.718092   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:15.718097   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:15.718153   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:15.754094   65592 cri.go:89] found id: ""
	I1001 20:25:15.754121   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.754130   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:15.754136   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:15.754189   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:15.790415   65592 cri.go:89] found id: ""
	I1001 20:25:15.790450   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.790464   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:15.790472   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:15.790535   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:15.825603   65592 cri.go:89] found id: ""
	I1001 20:25:15.825630   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.825645   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:15.825653   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:15.825717   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:15.861330   65592 cri.go:89] found id: ""
	I1001 20:25:15.861356   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.861368   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:15.861375   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:15.861451   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:15.897534   65592 cri.go:89] found id: ""
	I1001 20:25:15.897564   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.897575   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:15.897584   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:15.897598   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:15.972842   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:15.972881   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:16.010625   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:16.010653   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:16.062717   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:16.062762   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:16.076538   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:16.076568   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:16.156886   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
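(Editor's note on the repeated "describe nodes" failure above: `kubectl describe nodes` is aimed at the apiserver on localhost:8443, and since the crictl probes find no kube-apiserver container, nothing is listening there and every attempt ends in "connection refused". The tiny Go check below, which is not part of minikube and only borrows the host and port from the log, shows the pre-flight test that would surface the same condition directly.)

package main

import (
	"fmt"
	"net"
	"time"
)

// Probe the apiserver endpoint that kubectl keeps failing to reach in the log
// above. A refused connection here corresponds to every "connection to the
// server localhost:8443 was refused" line: no kube-apiserver container is
// running, so nothing accepts the TCP connection.
func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open; kubectl describe nodes should succeed")
}

(End of note; verbatim log continues.)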
	I1001 20:25:15.118878   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:17.119791   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:17.428649   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:17.764089   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:20.261752   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:18.657436   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:18.673018   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:18.673093   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:18.708040   65592 cri.go:89] found id: ""
	I1001 20:25:18.708078   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.708091   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:18.708100   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:18.708167   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:18.740152   65592 cri.go:89] found id: ""
	I1001 20:25:18.740188   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.740200   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:18.740207   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:18.740264   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:18.778238   65592 cri.go:89] found id: ""
	I1001 20:25:18.778270   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.778279   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:18.778287   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:18.778351   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:18.815450   65592 cri.go:89] found id: ""
	I1001 20:25:18.815489   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.815503   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:18.815512   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:18.815576   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:18.850008   65592 cri.go:89] found id: ""
	I1001 20:25:18.850038   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.850047   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:18.850053   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:18.850104   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:18.890919   65592 cri.go:89] found id: ""
	I1001 20:25:18.890943   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.890951   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:18.890957   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:18.891004   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:18.934196   65592 cri.go:89] found id: ""
	I1001 20:25:18.934228   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.934240   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:18.934247   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:18.934307   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:18.977817   65592 cri.go:89] found id: ""
	I1001 20:25:18.977850   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.977862   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:18.977875   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:18.977889   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:19.039867   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:19.039910   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:19.054277   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:19.054310   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:19.125736   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:19.125765   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:19.125782   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:19.208588   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:19.208622   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:21.750881   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:21.766638   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:21.766712   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:21.801906   65592 cri.go:89] found id: ""
	I1001 20:25:21.801930   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.801938   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:21.801944   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:21.801990   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:21.842801   65592 cri.go:89] found id: ""
	I1001 20:25:21.842830   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.842844   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:21.842852   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:21.842917   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:21.876550   65592 cri.go:89] found id: ""
	I1001 20:25:21.876577   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.876588   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:21.876594   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:21.876647   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:21.910972   65592 cri.go:89] found id: ""
	I1001 20:25:21.911007   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.911016   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:21.911022   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:21.911098   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:21.945721   65592 cri.go:89] found id: ""
	I1001 20:25:21.945753   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.945765   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:21.945773   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:21.945833   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:21.982101   65592 cri.go:89] found id: ""
	I1001 20:25:21.982131   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.982143   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:21.982151   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:21.982242   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:22.016526   65592 cri.go:89] found id: ""
	I1001 20:25:22.016558   65592 logs.go:276] 0 containers: []
	W1001 20:25:22.016569   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:22.016577   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:22.016632   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:22.054792   65592 cri.go:89] found id: ""
	I1001 20:25:22.054822   65592 logs.go:276] 0 containers: []
	W1001 20:25:22.054833   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:22.054844   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:22.054863   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:22.105936   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:22.105974   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:22.120834   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:22.120858   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:22.195177   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:22.195211   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:22.195228   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:19.120304   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:21.618511   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:23.512698   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:22.264134   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:24.762355   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:22.281244   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:22.281285   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:24.824197   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:24.840967   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:24.841030   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:24.882399   65592 cri.go:89] found id: ""
	I1001 20:25:24.882429   65592 logs.go:276] 0 containers: []
	W1001 20:25:24.882443   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:24.882449   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:24.882497   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:24.935548   65592 cri.go:89] found id: ""
	I1001 20:25:24.935581   65592 logs.go:276] 0 containers: []
	W1001 20:25:24.935590   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:24.935596   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:24.935644   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:24.976931   65592 cri.go:89] found id: ""
	I1001 20:25:24.976958   65592 logs.go:276] 0 containers: []
	W1001 20:25:24.976969   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:24.976976   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:24.977035   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:25.009926   65592 cri.go:89] found id: ""
	I1001 20:25:25.009959   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.009968   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:25.009975   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:25.010039   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:25.043261   65592 cri.go:89] found id: ""
	I1001 20:25:25.043299   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.043310   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:25.043316   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:25.043377   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:25.075177   65592 cri.go:89] found id: ""
	I1001 20:25:25.075205   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.075214   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:25.075221   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:25.075267   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:25.109792   65592 cri.go:89] found id: ""
	I1001 20:25:25.109832   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.109845   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:25.109871   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:25.109942   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:25.148721   65592 cri.go:89] found id: ""
	I1001 20:25:25.148753   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.148763   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:25.148772   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:25.148790   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:25.161802   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:25.161841   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:25.227699   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:25.227732   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:25.227750   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:25.314028   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:25.314075   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:25.354881   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:25.354919   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:23.618792   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:26.118493   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:26.580628   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:27.262584   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:29.761866   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:27.906936   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:27.920745   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:27.920806   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:27.955399   65592 cri.go:89] found id: ""
	I1001 20:25:27.955426   65592 logs.go:276] 0 containers: []
	W1001 20:25:27.955444   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:27.955450   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:27.955503   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:27.993714   65592 cri.go:89] found id: ""
	I1001 20:25:27.993747   65592 logs.go:276] 0 containers: []
	W1001 20:25:27.993759   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:27.993766   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:27.993827   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:28.028439   65592 cri.go:89] found id: ""
	I1001 20:25:28.028475   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.028487   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:28.028494   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:28.028563   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:28.072935   65592 cri.go:89] found id: ""
	I1001 20:25:28.072966   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.072977   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:28.072985   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:28.073050   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:28.107241   65592 cri.go:89] found id: ""
	I1001 20:25:28.107275   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.107285   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:28.107293   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:28.107357   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:28.141382   65592 cri.go:89] found id: ""
	I1001 20:25:28.141412   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.141423   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:28.141431   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:28.141494   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:28.175749   65592 cri.go:89] found id: ""
	I1001 20:25:28.175782   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.175794   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:28.175801   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:28.175864   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:28.214968   65592 cri.go:89] found id: ""
	I1001 20:25:28.214997   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.215006   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:28.215015   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:28.215027   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:28.259588   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:28.259619   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:28.314439   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:28.314480   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:28.327938   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:28.327967   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:28.399479   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:28.399508   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:28.399523   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:30.978863   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:30.991415   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:30.991493   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:31.026443   65592 cri.go:89] found id: ""
	I1001 20:25:31.026480   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.026494   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:31.026513   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:31.026576   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:31.060635   65592 cri.go:89] found id: ""
	I1001 20:25:31.060663   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.060678   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:31.060684   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:31.060743   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:31.095494   65592 cri.go:89] found id: ""
	I1001 20:25:31.095525   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.095533   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:31.095540   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:31.095587   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:31.130693   65592 cri.go:89] found id: ""
	I1001 20:25:31.130718   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.130728   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:31.130741   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:31.130802   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:31.167928   65592 cri.go:89] found id: ""
	I1001 20:25:31.167960   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.167973   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:31.167980   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:31.168033   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:31.202813   65592 cri.go:89] found id: ""
	I1001 20:25:31.202843   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.202855   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:31.202864   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:31.202925   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:31.240424   65592 cri.go:89] found id: ""
	I1001 20:25:31.240459   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.240468   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:31.240474   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:31.240521   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:31.275470   65592 cri.go:89] found id: ""
	I1001 20:25:31.275502   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.275510   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:31.275518   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:31.275529   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:31.329604   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:31.329642   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:31.342695   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:31.342724   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:31.410169   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:31.410275   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:31.410303   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:31.489630   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:31.489677   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:28.118608   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:30.118718   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:32.119227   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:32.660640   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:35.732653   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:31.762062   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:33.764597   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:36.263251   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:34.027406   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:34.039902   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:34.039975   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:34.074992   65592 cri.go:89] found id: ""
	I1001 20:25:34.075025   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.075038   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:34.075045   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:34.075106   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:34.110264   65592 cri.go:89] found id: ""
	I1001 20:25:34.110293   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.110304   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:34.110311   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:34.110371   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:34.147097   65592 cri.go:89] found id: ""
	I1001 20:25:34.147132   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.147143   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:34.147151   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:34.147208   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:34.179453   65592 cri.go:89] found id: ""
	I1001 20:25:34.179481   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.179491   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:34.179500   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:34.179554   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:34.212407   65592 cri.go:89] found id: ""
	I1001 20:25:34.212433   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.212442   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:34.212449   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:34.212495   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:34.244400   65592 cri.go:89] found id: ""
	I1001 20:25:34.244429   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.244440   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:34.244447   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:34.244510   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:34.278423   65592 cri.go:89] found id: ""
	I1001 20:25:34.278448   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.278458   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:34.278464   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:34.278520   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:34.311019   65592 cri.go:89] found id: ""
	I1001 20:25:34.311049   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.311059   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:34.311072   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:34.311083   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:34.347521   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:34.347549   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:34.400717   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:34.400754   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:34.414550   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:34.414576   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:34.486478   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:34.486503   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:34.486519   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:37.071687   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:37.084941   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:37.085025   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:37.119834   65592 cri.go:89] found id: ""
	I1001 20:25:37.119862   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.119870   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:37.119875   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:37.119984   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:37.154795   65592 cri.go:89] found id: ""
	I1001 20:25:37.154832   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.154851   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:37.154867   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:37.154927   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:37.191552   65592 cri.go:89] found id: ""
	I1001 20:25:37.191581   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.191592   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:37.191599   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:37.191670   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:34.119370   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:36.119698   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:38.761540   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:40.762894   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:37.228883   65592 cri.go:89] found id: ""
	I1001 20:25:37.228918   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.228928   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:37.228936   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:37.229000   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:37.263533   65592 cri.go:89] found id: ""
	I1001 20:25:37.263558   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.263568   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:37.263577   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:37.263638   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:37.297367   65592 cri.go:89] found id: ""
	I1001 20:25:37.297401   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.297414   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:37.297422   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:37.297486   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:37.331091   65592 cri.go:89] found id: ""
	I1001 20:25:37.331121   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.331129   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:37.331135   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:37.331202   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:37.364861   65592 cri.go:89] found id: ""
	I1001 20:25:37.364889   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.364897   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:37.364905   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:37.364916   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:37.417507   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:37.417545   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:37.431613   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:37.431646   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:37.497821   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:37.497846   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:37.497861   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:37.578951   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:37.578996   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:40.121350   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:40.134553   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:40.134634   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:40.169277   65592 cri.go:89] found id: ""
	I1001 20:25:40.169313   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.169325   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:40.169333   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:40.169399   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:40.204111   65592 cri.go:89] found id: ""
	I1001 20:25:40.204144   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.204153   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:40.204159   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:40.204206   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:40.237841   65592 cri.go:89] found id: ""
	I1001 20:25:40.237872   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.237880   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:40.237886   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:40.237942   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:40.273081   65592 cri.go:89] found id: ""
	I1001 20:25:40.273108   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.273117   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:40.273123   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:40.273186   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:40.307351   65592 cri.go:89] found id: ""
	I1001 20:25:40.307384   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.307394   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:40.307399   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:40.307462   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:40.340543   65592 cri.go:89] found id: ""
	I1001 20:25:40.340569   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.340578   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:40.340584   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:40.340655   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:40.376070   65592 cri.go:89] found id: ""
	I1001 20:25:40.376112   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.376123   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:40.376130   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:40.376194   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:40.410236   65592 cri.go:89] found id: ""
	I1001 20:25:40.410267   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.410279   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:40.410289   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:40.410300   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:40.463799   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:40.463835   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:40.478403   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:40.478436   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:40.547250   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:40.547279   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:40.547291   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:40.630061   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:40.630098   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:38.617891   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:40.618430   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:41.612771   65263 pod_ready.go:82] duration metric: took 4m0.000338317s for pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace to be "Ready" ...
	E1001 20:25:41.612803   65263 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace to be "Ready" (will not retry!)
	I1001 20:25:41.612832   65263 pod_ready.go:39] duration metric: took 4m13.169141642s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:25:41.612859   65263 kubeadm.go:597] duration metric: took 4m21.203039001s to restartPrimaryControlPlane
	W1001 20:25:41.612919   65263 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1001 20:25:41.612944   65263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1001 20:25:41.812689   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:44.884661   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:43.264334   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:45.762034   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:43.170764   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:43.183046   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:43.183124   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:43.222995   65592 cri.go:89] found id: ""
	I1001 20:25:43.223029   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.223038   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:43.223044   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:43.223105   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:43.256861   65592 cri.go:89] found id: ""
	I1001 20:25:43.256891   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.256902   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:43.256910   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:43.257002   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:43.292643   65592 cri.go:89] found id: ""
	I1001 20:25:43.292687   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.292698   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:43.292704   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:43.292754   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:43.326539   65592 cri.go:89] found id: ""
	I1001 20:25:43.326568   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.326576   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:43.326582   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:43.326628   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:43.359787   65592 cri.go:89] found id: ""
	I1001 20:25:43.359813   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.359822   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:43.359828   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:43.359890   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:43.392045   65592 cri.go:89] found id: ""
	I1001 20:25:43.392076   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.392086   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:43.392092   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:43.392145   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:43.429498   65592 cri.go:89] found id: ""
	I1001 20:25:43.429529   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.429538   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:43.429544   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:43.429591   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:43.462728   65592 cri.go:89] found id: ""
	I1001 20:25:43.462760   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.462771   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:43.462781   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:43.462798   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:43.512683   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:43.512717   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:43.527253   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:43.527285   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:43.598963   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:43.598989   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:43.599003   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:43.679743   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:43.679790   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:46.217101   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:46.230349   65592 kubeadm.go:597] duration metric: took 4m1.895228035s to restartPrimaryControlPlane
	W1001 20:25:46.230421   65592 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1001 20:25:46.230450   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1001 20:25:47.762241   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:49.763115   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:47.271291   65592 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.040818559s)
	I1001 20:25:47.271362   65592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:25:47.285083   65592 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 20:25:47.295774   65592 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:25:47.305487   65592 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:25:47.305511   65592 kubeadm.go:157] found existing configuration files:
	
	I1001 20:25:47.305568   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 20:25:47.314488   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:25:47.314573   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:25:47.323852   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 20:25:47.332496   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:25:47.332553   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:25:47.341236   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 20:25:47.349932   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:25:47.350002   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:25:47.359345   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 20:25:47.369180   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:25:47.369233   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 20:25:47.378232   65592 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 20:25:47.595501   65592 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 20:25:50.964640   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:54.036635   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:52.261890   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:54.761886   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:00.116640   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:57.261837   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:59.262445   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:01.262529   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:03.188675   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:03.762361   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:06.261749   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:07.708438   65263 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.095470945s)
	I1001 20:26:07.708514   65263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:26:07.722982   65263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 20:26:07.732118   65263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:26:07.741172   65263 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:26:07.741198   65263 kubeadm.go:157] found existing configuration files:
	
	I1001 20:26:07.741244   65263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 20:26:07.749683   65263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:26:07.749744   65263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:26:07.758875   65263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 20:26:07.767668   65263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:26:07.767739   65263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:26:07.776648   65263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 20:26:07.785930   65263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:26:07.785982   65263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:26:07.794739   65263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 20:26:07.803180   65263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:26:07.803241   65263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 20:26:07.812178   65263 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 20:26:07.851817   65263 kubeadm.go:310] W1001 20:26:07.836874    2546 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 20:26:07.852402   65263 kubeadm.go:310] W1001 20:26:07.837670    2546 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 20:26:09.272541   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:08.761247   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:10.761797   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:07.957551   65263 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 20:26:12.344653   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:16.385918   65263 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 20:26:16.385979   65263 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:26:16.386062   65263 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:26:16.386172   65263 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:26:16.386297   65263 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 20:26:16.386400   65263 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 20:26:16.387827   65263 out.go:235]   - Generating certificates and keys ...
	I1001 20:26:16.387909   65263 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:26:16.387989   65263 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:26:16.388104   65263 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 20:26:16.388191   65263 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 20:26:16.388284   65263 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 20:26:16.388370   65263 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 20:26:16.388464   65263 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 20:26:16.388545   65263 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 20:26:16.388646   65263 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 20:26:16.388775   65263 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 20:26:16.388824   65263 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 20:26:16.388908   65263 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:26:16.388956   65263 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:26:16.389006   65263 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 20:26:16.389048   65263 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:26:16.389117   65263 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:26:16.389201   65263 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:26:16.389333   65263 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:26:16.389444   65263 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:26:16.390823   65263 out.go:235]   - Booting up control plane ...
	I1001 20:26:16.390917   65263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:26:16.390992   65263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:26:16.391061   65263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:26:16.391161   65263 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:26:16.391285   65263 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:26:16.391335   65263 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:26:16.391468   65263 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 20:26:16.391572   65263 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 20:26:16.391628   65263 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.349149ms
	I1001 20:26:16.391686   65263 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1001 20:26:16.391736   65263 kubeadm.go:310] [api-check] The API server is healthy after 5.002046172s
	I1001 20:26:16.391818   65263 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 20:26:16.391923   65263 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 20:26:16.391999   65263 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 20:26:16.392169   65263 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-106982 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 20:26:16.392225   65263 kubeadm.go:310] [bootstrap-token] Using token: xlxn2k.owwnzt3amr4nx0st
	I1001 20:26:16.393437   65263 out.go:235]   - Configuring RBAC rules ...
	I1001 20:26:16.393539   65263 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 20:26:16.393609   65263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 20:26:16.393722   65263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 20:26:16.393834   65263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 20:26:16.393940   65263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 20:26:16.394017   65263 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 20:26:16.394117   65263 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 20:26:16.394154   65263 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 20:26:16.394195   65263 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 20:26:16.394200   65263 kubeadm.go:310] 
	I1001 20:26:16.394259   65263 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 20:26:16.394269   65263 kubeadm.go:310] 
	I1001 20:26:16.394335   65263 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 20:26:16.394341   65263 kubeadm.go:310] 
	I1001 20:26:16.394363   65263 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 20:26:16.394440   65263 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 20:26:16.394496   65263 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 20:26:16.394502   65263 kubeadm.go:310] 
	I1001 20:26:16.394553   65263 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 20:26:16.394559   65263 kubeadm.go:310] 
	I1001 20:26:16.394601   65263 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 20:26:16.394611   65263 kubeadm.go:310] 
	I1001 20:26:16.394656   65263 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 20:26:16.394720   65263 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 20:26:16.394804   65263 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 20:26:16.394814   65263 kubeadm.go:310] 
	I1001 20:26:16.394901   65263 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 20:26:16.394996   65263 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 20:26:16.395010   65263 kubeadm.go:310] 
	I1001 20:26:16.395128   65263 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xlxn2k.owwnzt3amr4nx0st \
	I1001 20:26:16.395262   65263 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 \
	I1001 20:26:16.395299   65263 kubeadm.go:310] 	--control-plane 
	I1001 20:26:16.395308   65263 kubeadm.go:310] 
	I1001 20:26:16.395426   65263 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 20:26:16.395436   65263 kubeadm.go:310] 
	I1001 20:26:16.395548   65263 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xlxn2k.owwnzt3amr4nx0st \
	I1001 20:26:16.395648   65263 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 
	I1001 20:26:16.395658   65263 cni.go:84] Creating CNI manager for ""
	I1001 20:26:16.395665   65263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:26:16.396852   65263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 20:26:12.763435   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:15.262381   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:16.398081   65263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 20:26:16.407920   65263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
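
The file copied to /etc/cni/net.d/1-k8s.conflist above is a standard CNI conflist for the bridge plugin. Below is a rough Go sketch of writing such a config; the field values are assumptions for illustration, not the exact 496-byte file minikube generates.

package main

import "os"

// Illustrative bridge CNI conflist; values are assumptions, not minikube's exact file.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
`

func main() {
    // Writing here requires root, mirroring the "sudo mkdir -p /etc/cni/net.d" step above.
    if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
        panic(err)
    }
    if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
        panic(err)
    }
}
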
	I1001 20:26:16.428213   65263 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 20:26:16.428312   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:16.428344   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-106982 minikube.k8s.io/updated_at=2024_10_01T20_26_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=embed-certs-106982 minikube.k8s.io/primary=true
	I1001 20:26:16.667876   65263 ops.go:34] apiserver oom_adj: -16
	I1001 20:26:16.667891   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:17.168194   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:17.668772   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:18.168815   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:18.668087   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:19.168767   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:19.668624   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:20.167974   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:20.668002   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:20.758486   65263 kubeadm.go:1113] duration metric: took 4.330238814s to wait for elevateKubeSystemPrivileges
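
The repeated "kubectl get sa default" runs above are a poll: the command is retried until the default service account exists, which is the wait the duration metric covers. A minimal sketch of such a retry loop, assuming a ~500ms interval (matching the log cadence) and a 2-minute deadline that is not shown in the log:

package main

import (
    "fmt"
    "os/exec"
    "time"
)

func main() {
    // Poll "kubectl get sa default" until it succeeds, roughly every 500ms as in the log.
    deadline := time.Now().Add(2 * time.Minute) // assumed deadline for illustration
    for time.Now().Before(deadline) {
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.31.1/kubectl",
            "get", "sa", "default",
            "--kubeconfig=/var/lib/minikube/kubeconfig")
        if err := cmd.Run(); err == nil {
            fmt.Println("default service account is present")
            return
        }
        time.Sleep(500 * time.Millisecond)
    }
    fmt.Println("timed out waiting for the default service account")
}
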
	I1001 20:26:20.758520   65263 kubeadm.go:394] duration metric: took 5m0.403602376s to StartCluster
	I1001 20:26:20.758539   65263 settings.go:142] acquiring lock: {Name:mkeb3fe64ed992373f048aef24eb0d675bcab60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:26:20.758613   65263 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:26:20.760430   65263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/kubeconfig: {Name:mk7ef19d546928001abf2478a3abe1d17765a591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:26:20.760678   65263 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 20:26:20.760746   65263 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 20:26:20.760852   65263 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-106982"
	I1001 20:26:20.760881   65263 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-106982"
	I1001 20:26:20.760877   65263 addons.go:69] Setting default-storageclass=true in profile "embed-certs-106982"
	W1001 20:26:20.760893   65263 addons.go:243] addon storage-provisioner should already be in state true
	I1001 20:26:20.760891   65263 addons.go:69] Setting metrics-server=true in profile "embed-certs-106982"
	I1001 20:26:20.760926   65263 host.go:66] Checking if "embed-certs-106982" exists ...
	I1001 20:26:20.760926   65263 addons.go:234] Setting addon metrics-server=true in "embed-certs-106982"
	W1001 20:26:20.761009   65263 addons.go:243] addon metrics-server should already be in state true
	I1001 20:26:20.761041   65263 host.go:66] Checking if "embed-certs-106982" exists ...
	I1001 20:26:20.760906   65263 config.go:182] Loaded profile config "embed-certs-106982": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:26:20.760902   65263 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-106982"
	I1001 20:26:20.761374   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.761426   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.761429   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.761468   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.761545   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.761591   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.762861   65263 out.go:177] * Verifying Kubernetes components...
	I1001 20:26:20.764393   65263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:26:20.778448   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38619
	I1001 20:26:20.779031   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.779198   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39861
	I1001 20:26:20.779632   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.779657   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.779822   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.780085   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.780331   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.780352   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.780789   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.780829   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.781030   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.781240   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetState
	I1001 20:26:20.781260   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37397
	I1001 20:26:20.781672   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.782168   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.782189   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.782587   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.783037   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.783073   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.784573   65263 addons.go:234] Setting addon default-storageclass=true in "embed-certs-106982"
	W1001 20:26:20.784589   65263 addons.go:243] addon default-storageclass should already be in state true
	I1001 20:26:20.784609   65263 host.go:66] Checking if "embed-certs-106982" exists ...
	I1001 20:26:20.784877   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.784912   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.797787   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37113
	I1001 20:26:20.797864   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33201
	I1001 20:26:20.798261   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.798311   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.798836   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.798855   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.798931   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.798951   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.799226   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35413
	I1001 20:26:20.799230   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.799367   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.799409   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetState
	I1001 20:26:20.799515   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetState
	I1001 20:26:20.799695   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.800114   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.800130   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.800602   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.801316   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.801331   65263 main.go:141] libmachine: (embed-certs-106982) Calling .DriverName
	I1001 20:26:20.801351   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.801391   65263 main.go:141] libmachine: (embed-certs-106982) Calling .DriverName
	I1001 20:26:20.803237   65263 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1001 20:26:20.803241   65263 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 20:26:18.420597   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:17.762869   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:20.262479   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:20.804378   65263 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1001 20:26:20.804394   65263 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1001 20:26:20.804411   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHHostname
	I1001 20:26:20.804571   65263 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:26:20.804586   65263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 20:26:20.804603   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHHostname
	I1001 20:26:20.808458   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.808866   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.808906   65263 main.go:141] libmachine: (embed-certs-106982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:ab:67", ip: ""} in network mk-embed-certs-106982: {Iface:virbr1 ExpiryTime:2024-10-01 21:21:05 +0000 UTC Type:0 Mac:52:54:00:7f:ab:67 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:embed-certs-106982 Clientid:01:52:54:00:7f:ab:67}
	I1001 20:26:20.808923   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined IP address 192.168.39.203 and MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.809183   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHPort
	I1001 20:26:20.809326   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHKeyPath
	I1001 20:26:20.809462   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHUsername
	I1001 20:26:20.809582   65263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/embed-certs-106982/id_rsa Username:docker}
	I1001 20:26:20.809917   65263 main.go:141] libmachine: (embed-certs-106982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:ab:67", ip: ""} in network mk-embed-certs-106982: {Iface:virbr1 ExpiryTime:2024-10-01 21:21:05 +0000 UTC Type:0 Mac:52:54:00:7f:ab:67 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:embed-certs-106982 Clientid:01:52:54:00:7f:ab:67}
	I1001 20:26:20.809941   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined IP address 192.168.39.203 and MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.809975   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHPort
	I1001 20:26:20.810172   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHKeyPath
	I1001 20:26:20.810320   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHUsername
	I1001 20:26:20.810498   65263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/embed-certs-106982/id_rsa Username:docker}
	I1001 20:26:20.818676   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33699
	I1001 20:26:20.819066   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.819574   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.819596   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.819900   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.820110   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetState
	I1001 20:26:20.821633   65263 main.go:141] libmachine: (embed-certs-106982) Calling .DriverName
	I1001 20:26:20.821820   65263 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 20:26:20.821834   65263 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 20:26:20.821852   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHHostname
	I1001 20:26:20.824684   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.825165   65263 main.go:141] libmachine: (embed-certs-106982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:ab:67", ip: ""} in network mk-embed-certs-106982: {Iface:virbr1 ExpiryTime:2024-10-01 21:21:05 +0000 UTC Type:0 Mac:52:54:00:7f:ab:67 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:embed-certs-106982 Clientid:01:52:54:00:7f:ab:67}
	I1001 20:26:20.825205   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined IP address 192.168.39.203 and MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.825425   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHPort
	I1001 20:26:20.825577   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHKeyPath
	I1001 20:26:20.825697   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHUsername
	I1001 20:26:20.825835   65263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/embed-certs-106982/id_rsa Username:docker}
	I1001 20:26:20.984756   65263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:26:21.014051   65263 node_ready.go:35] waiting up to 6m0s for node "embed-certs-106982" to be "Ready" ...
	I1001 20:26:21.023227   65263 node_ready.go:49] node "embed-certs-106982" has status "Ready":"True"
	I1001 20:26:21.023274   65263 node_ready.go:38] duration metric: took 9.170523ms for node "embed-certs-106982" to be "Ready" ...
	I1001 20:26:21.023286   65263 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:26:21.029371   65263 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rq5ms" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:21.113480   65263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1001 20:26:21.113509   65263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1001 20:26:21.138000   65263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1001 20:26:21.138028   65263 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1001 20:26:21.162057   65263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:26:21.240772   65263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 20:26:21.251310   65263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 20:26:21.251337   65263 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1001 20:26:21.316994   65263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 20:26:22.282775   65263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.041963655s)
	I1001 20:26:22.282809   65263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.120713974s)
	I1001 20:26:22.282835   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.282849   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.282849   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.282864   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.283226   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.283243   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.283256   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.283265   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.283244   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.283298   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.283311   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.283275   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.283278   65263 main.go:141] libmachine: (embed-certs-106982) DBG | Closing plugin on server side
	I1001 20:26:22.283808   65263 main.go:141] libmachine: (embed-certs-106982) DBG | Closing plugin on server side
	I1001 20:26:22.283808   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.283839   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.283892   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.283907   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.342382   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.342407   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.342708   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.342732   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.434882   65263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.117844425s)
	I1001 20:26:22.434937   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.434950   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.435276   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.435291   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.435301   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.435309   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.435554   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.435582   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.435593   65263 addons.go:475] Verifying addon metrics-server=true in "embed-certs-106982"
	I1001 20:26:22.437796   65263 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1001 20:26:22.438856   65263 addons.go:510] duration metric: took 1.678119807s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
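
Each addon manifest above is applied by shelling out to the bundled kubectl with KUBECONFIG pointed at the cluster's kubeconfig, exactly as the Run lines show. A hedged sketch of that pattern; the helper below is hypothetical, not minikube code:

package main

import (
    "fmt"
    "os"
    "os/exec"
)

// applyManifest mirrors the command form in the log:
// sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f <manifest>
func applyManifest(manifest string) error {
    cmd := exec.Command("sudo",
        "KUBECONFIG=/var/lib/minikube/kubeconfig",
        "/var/lib/minikube/binaries/v1.31.1/kubectl",
        "apply", "-f", manifest)
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr
    return cmd.Run()
}

func main() {
    if err := applyManifest("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
        fmt.Println("apply failed:", err)
    }
}
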
	I1001 20:26:21.492616   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:22.263077   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:24.761931   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:23.036676   65263 pod_ready.go:103] pod "coredns-7c65d6cfc9-rq5ms" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:25.537836   65263 pod_ready.go:103] pod "coredns-7c65d6cfc9-rq5ms" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:26.536827   65263 pod_ready.go:93] pod "coredns-7c65d6cfc9-rq5ms" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:26.536853   65263 pod_ready.go:82] duration metric: took 5.507455172s for pod "coredns-7c65d6cfc9-rq5ms" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:26.536865   65263 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wfdwp" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:26.541397   65263 pod_ready.go:93] pod "coredns-7c65d6cfc9-wfdwp" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:26.541427   65263 pod_ready.go:82] duration metric: took 4.554335ms for pod "coredns-7c65d6cfc9-wfdwp" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:26.541436   65263 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.048586   65263 pod_ready.go:93] pod "etcd-embed-certs-106982" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:27.048612   65263 pod_ready.go:82] duration metric: took 507.170207ms for pod "etcd-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.048622   65263 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.053967   65263 pod_ready.go:93] pod "kube-apiserver-embed-certs-106982" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:27.053994   65263 pod_ready.go:82] duration metric: took 5.365871ms for pod "kube-apiserver-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.054007   65263 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.059419   65263 pod_ready.go:93] pod "kube-controller-manager-embed-certs-106982" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:27.059441   65263 pod_ready.go:82] duration metric: took 5.427863ms for pod "kube-controller-manager-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.059452   65263 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fjnvc" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.333488   65263 pod_ready.go:93] pod "kube-proxy-fjnvc" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:27.333512   65263 pod_ready.go:82] duration metric: took 274.054021ms for pod "kube-proxy-fjnvc" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.333521   65263 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.733368   65263 pod_ready.go:93] pod "kube-scheduler-embed-certs-106982" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:27.733392   65263 pod_ready.go:82] duration metric: took 399.861423ms for pod "kube-scheduler-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.733400   65263 pod_ready.go:39] duration metric: took 6.710101442s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
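
The per-pod "Ready" waits above could equally be expressed as a selector-based "kubectl wait", in the style used elsewhere in this report. A sketch under that assumption; the context name comes from the profile, while the selector and timeout are illustrative:

package main

import (
    "os"
    "os/exec"
)

func main() {
    // Selector-based readiness wait; selector and timeout are assumptions for illustration.
    cmd := exec.Command("kubectl", "--context", "embed-certs-106982",
        "wait", "--for=condition=ready",
        "--namespace=kube-system", "pod",
        "--selector=k8s-app=kube-dns",
        "--timeout=6m0s")
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr
    if err := cmd.Run(); err != nil {
        os.Exit(1)
    }
}
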
	I1001 20:26:27.733422   65263 api_server.go:52] waiting for apiserver process to appear ...
	I1001 20:26:27.733476   65263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:26:27.750336   65263 api_server.go:72] duration metric: took 6.989620923s to wait for apiserver process to appear ...
	I1001 20:26:27.750367   65263 api_server.go:88] waiting for apiserver healthz status ...
	I1001 20:26:27.750389   65263 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I1001 20:26:27.755350   65263 api_server.go:279] https://192.168.39.203:8443/healthz returned 200:
	ok
	I1001 20:26:27.756547   65263 api_server.go:141] control plane version: v1.31.1
	I1001 20:26:27.756572   65263 api_server.go:131] duration metric: took 6.196295ms to wait for apiserver health ...
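
The healthz gate above is a plain HTTPS GET against the apiserver that treats a 200 response with body "ok" as healthy. A minimal standalone sketch of such a probe; the endpoint comes from the log, while the TLS handling (skipping verification) is an illustration-only assumption rather than minikube's real client setup:

package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "net/http"
    "time"
)

func main() {
    // Probe the apiserver's /healthz endpoint; 200 with body "ok" means healthy.
    client := &http.Client{
        Timeout: 5 * time.Second,
        Transport: &http.Transport{
            // Illustration only: a real check would trust the cluster CA instead.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        },
    }
    resp, err := client.Get("https://192.168.39.203:8443/healthz")
    if err != nil {
        fmt.Println("healthz check failed:", err)
        return
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
}
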
	I1001 20:26:27.756583   65263 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 20:26:27.937329   65263 system_pods.go:59] 9 kube-system pods found
	I1001 20:26:27.937364   65263 system_pods.go:61] "coredns-7c65d6cfc9-rq5ms" [652fcc3d-ae12-4e11-b212-8891c1c05701] Running
	I1001 20:26:27.937373   65263 system_pods.go:61] "coredns-7c65d6cfc9-wfdwp" [1174cd48-6855-4813-9ecd-3b3a82386720] Running
	I1001 20:26:27.937380   65263 system_pods.go:61] "etcd-embed-certs-106982" [84d678ad-7322-48d0-8bab-6c683d3cf8a5] Running
	I1001 20:26:27.937386   65263 system_pods.go:61] "kube-apiserver-embed-certs-106982" [93d7fba8-306f-4b04-b65b-e3d4442f9ba6] Running
	I1001 20:26:27.937392   65263 system_pods.go:61] "kube-controller-manager-embed-certs-106982" [5e405af0-a942-4040-a955-8a007c2fc6e9] Running
	I1001 20:26:27.937396   65263 system_pods.go:61] "kube-proxy-fjnvc" [728b1b90-5961-45e9-9818-8fc6f6db1634] Running
	I1001 20:26:27.937402   65263 system_pods.go:61] "kube-scheduler-embed-certs-106982" [c0289891-9235-44de-a3cb-669648f5c18e] Running
	I1001 20:26:27.937416   65263 system_pods.go:61] "metrics-server-6867b74b74-z27sl" [dd1b4cdc-5ffb-4214-96c9-569ef4f7ba09] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:26:27.937427   65263 system_pods.go:61] "storage-provisioner" [3aaab1f2-8361-46c6-88be-ed9004628715] Running
	I1001 20:26:27.937441   65263 system_pods.go:74] duration metric: took 180.849735ms to wait for pod list to return data ...
	I1001 20:26:27.937453   65263 default_sa.go:34] waiting for default service account to be created ...
	I1001 20:26:28.133918   65263 default_sa.go:45] found service account: "default"
	I1001 20:26:28.133945   65263 default_sa.go:55] duration metric: took 196.482206ms for default service account to be created ...
	I1001 20:26:28.133955   65263 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 20:26:28.335883   65263 system_pods.go:86] 9 kube-system pods found
	I1001 20:26:28.335916   65263 system_pods.go:89] "coredns-7c65d6cfc9-rq5ms" [652fcc3d-ae12-4e11-b212-8891c1c05701] Running
	I1001 20:26:28.335923   65263 system_pods.go:89] "coredns-7c65d6cfc9-wfdwp" [1174cd48-6855-4813-9ecd-3b3a82386720] Running
	I1001 20:26:28.335927   65263 system_pods.go:89] "etcd-embed-certs-106982" [84d678ad-7322-48d0-8bab-6c683d3cf8a5] Running
	I1001 20:26:28.335931   65263 system_pods.go:89] "kube-apiserver-embed-certs-106982" [93d7fba8-306f-4b04-b65b-e3d4442f9ba6] Running
	I1001 20:26:28.335935   65263 system_pods.go:89] "kube-controller-manager-embed-certs-106982" [5e405af0-a942-4040-a955-8a007c2fc6e9] Running
	I1001 20:26:28.335939   65263 system_pods.go:89] "kube-proxy-fjnvc" [728b1b90-5961-45e9-9818-8fc6f6db1634] Running
	I1001 20:26:28.335942   65263 system_pods.go:89] "kube-scheduler-embed-certs-106982" [c0289891-9235-44de-a3cb-669648f5c18e] Running
	I1001 20:26:28.335947   65263 system_pods.go:89] "metrics-server-6867b74b74-z27sl" [dd1b4cdc-5ffb-4214-96c9-569ef4f7ba09] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:26:28.335951   65263 system_pods.go:89] "storage-provisioner" [3aaab1f2-8361-46c6-88be-ed9004628715] Running
	I1001 20:26:28.335959   65263 system_pods.go:126] duration metric: took 202.000148ms to wait for k8s-apps to be running ...
	I1001 20:26:28.335967   65263 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 20:26:28.336013   65263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:26:28.350578   65263 system_svc.go:56] duration metric: took 14.603568ms WaitForService to wait for kubelet
	I1001 20:26:28.350608   65263 kubeadm.go:582] duration metric: took 7.589898283s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:26:28.350630   65263 node_conditions.go:102] verifying NodePressure condition ...
	I1001 20:26:28.533508   65263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 20:26:28.533533   65263 node_conditions.go:123] node cpu capacity is 2
	I1001 20:26:28.533544   65263 node_conditions.go:105] duration metric: took 182.908473ms to run NodePressure ...
	I1001 20:26:28.533554   65263 start.go:241] waiting for startup goroutines ...
	I1001 20:26:28.533561   65263 start.go:246] waiting for cluster config update ...
	I1001 20:26:28.533571   65263 start.go:255] writing updated cluster config ...
	I1001 20:26:28.533862   65263 ssh_runner.go:195] Run: rm -f paused
	I1001 20:26:28.580991   65263 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 20:26:28.583612   65263 out.go:177] * Done! kubectl is now configured to use "embed-certs-106982" cluster and "default" namespace by default
	I1001 20:26:27.572585   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:30.648588   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:27.262297   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:29.761795   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:31.762340   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:34.261713   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:35.263742   64676 pod_ready.go:82] duration metric: took 4m0.008218565s for pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace to be "Ready" ...
	E1001 20:26:35.263766   64676 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1001 20:26:35.263774   64676 pod_ready.go:39] duration metric: took 4m6.044360969s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:26:35.263791   64676 api_server.go:52] waiting for apiserver process to appear ...
	I1001 20:26:35.263820   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:26:35.263879   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:26:35.314427   64676 cri.go:89] found id: "a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:35.314450   64676 cri.go:89] found id: ""
	I1001 20:26:35.314457   64676 logs.go:276] 1 containers: [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd]
	I1001 20:26:35.314510   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.319554   64676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:26:35.319627   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:26:35.352986   64676 cri.go:89] found id: "586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:35.353006   64676 cri.go:89] found id: ""
	I1001 20:26:35.353013   64676 logs.go:276] 1 containers: [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f]
	I1001 20:26:35.353061   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.356979   64676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:26:35.357044   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:26:35.397175   64676 cri.go:89] found id: "4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:35.397196   64676 cri.go:89] found id: ""
	I1001 20:26:35.397203   64676 logs.go:276] 1 containers: [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6]
	I1001 20:26:35.397250   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.401025   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:26:35.401108   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:26:35.434312   64676 cri.go:89] found id: "89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:35.434333   64676 cri.go:89] found id: ""
	I1001 20:26:35.434340   64676 logs.go:276] 1 containers: [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d]
	I1001 20:26:35.434400   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.438325   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:26:35.438385   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:26:35.480711   64676 cri.go:89] found id: "fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:35.480738   64676 cri.go:89] found id: ""
	I1001 20:26:35.480750   64676 logs.go:276] 1 containers: [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8]
	I1001 20:26:35.480795   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.484996   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:26:35.485073   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:26:35.524876   64676 cri.go:89] found id: "69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:35.524909   64676 cri.go:89] found id: ""
	I1001 20:26:35.524920   64676 logs.go:276] 1 containers: [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf]
	I1001 20:26:35.524984   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.529297   64676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:26:35.529366   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:26:35.564110   64676 cri.go:89] found id: ""
	I1001 20:26:35.564138   64676 logs.go:276] 0 containers: []
	W1001 20:26:35.564149   64676 logs.go:278] No container was found matching "kindnet"
	I1001 20:26:35.564157   64676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1001 20:26:35.564222   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1001 20:26:35.599279   64676 cri.go:89] found id: "a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
	I1001 20:26:35.599311   64676 cri.go:89] found id: "652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:35.599318   64676 cri.go:89] found id: ""
	I1001 20:26:35.599327   64676 logs.go:276] 2 containers: [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b]
	I1001 20:26:35.599379   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.603377   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.607668   64676 logs.go:123] Gathering logs for kubelet ...
	I1001 20:26:35.607698   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:26:35.678017   64676 logs.go:123] Gathering logs for coredns [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6] ...
	I1001 20:26:35.678053   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:35.717814   64676 logs.go:123] Gathering logs for kube-proxy [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8] ...
	I1001 20:26:35.717842   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:35.752647   64676 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:26:35.752680   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:26:36.259582   64676 logs.go:123] Gathering logs for container status ...
	I1001 20:26:36.259630   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:26:36.299857   64676 logs.go:123] Gathering logs for storage-provisioner [652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b] ...
	I1001 20:26:36.299892   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:36.339923   64676 logs.go:123] Gathering logs for dmesg ...
	I1001 20:26:36.339973   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:26:36.353728   64676 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:26:36.353763   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 20:26:36.728608   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:39.796591   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:36.482029   64676 logs.go:123] Gathering logs for kube-apiserver [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd] ...
	I1001 20:26:36.482071   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:36.525705   64676 logs.go:123] Gathering logs for etcd [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f] ...
	I1001 20:26:36.525741   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:36.566494   64676 logs.go:123] Gathering logs for kube-scheduler [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d] ...
	I1001 20:26:36.566529   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:36.602489   64676 logs.go:123] Gathering logs for kube-controller-manager [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf] ...
	I1001 20:26:36.602523   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:36.666726   64676 logs.go:123] Gathering logs for storage-provisioner [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa] ...
	I1001 20:26:36.666757   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
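
The log-gathering pattern above is two steps: list the container ID for a component with "crictl ps -a --quiet --name=<component>", then tail its logs with "crictl logs --tail 400 <id>". A small sketch of that flow for one component; kube-apiserver is just an example:

package main

import (
    "fmt"
    "os"
    "os/exec"
    "strings"
)

func main() {
    // Step 1: find the container ID, as in "sudo crictl ps -a --quiet --name=kube-apiserver".
    out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
    if err != nil {
        fmt.Println("crictl ps failed:", err)
        return
    }
    ids := strings.Fields(string(out))
    if len(ids) == 0 {
        fmt.Println("no kube-apiserver container found")
        return
    }
    // Step 2: tail its logs, as in "sudo /usr/bin/crictl logs --tail 400 <id>".
    logs := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", ids[0])
    logs.Stdout = os.Stdout
    logs.Stderr = os.Stderr
    _ = logs.Run()
}
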
	I1001 20:26:39.203217   64676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:26:39.220220   64676 api_server.go:72] duration metric: took 4m17.274155342s to wait for apiserver process to appear ...
	I1001 20:26:39.220253   64676 api_server.go:88] waiting for apiserver healthz status ...
	I1001 20:26:39.220301   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:26:39.220372   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:26:39.261710   64676 cri.go:89] found id: "a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:39.261739   64676 cri.go:89] found id: ""
	I1001 20:26:39.261749   64676 logs.go:276] 1 containers: [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd]
	I1001 20:26:39.261804   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.265994   64676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:26:39.266057   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:26:39.298615   64676 cri.go:89] found id: "586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:39.298642   64676 cri.go:89] found id: ""
	I1001 20:26:39.298650   64676 logs.go:276] 1 containers: [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f]
	I1001 20:26:39.298694   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.302584   64676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:26:39.302647   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:26:39.338062   64676 cri.go:89] found id: "4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:39.338091   64676 cri.go:89] found id: ""
	I1001 20:26:39.338102   64676 logs.go:276] 1 containers: [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6]
	I1001 20:26:39.338157   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.342553   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:26:39.342613   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:26:39.379787   64676 cri.go:89] found id: "89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:39.379818   64676 cri.go:89] found id: ""
	I1001 20:26:39.379828   64676 logs.go:276] 1 containers: [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d]
	I1001 20:26:39.379885   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.384397   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:26:39.384454   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:26:39.419175   64676 cri.go:89] found id: "fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:39.419204   64676 cri.go:89] found id: ""
	I1001 20:26:39.419215   64676 logs.go:276] 1 containers: [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8]
	I1001 20:26:39.419275   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.423113   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:26:39.423184   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:26:39.455948   64676 cri.go:89] found id: "69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:39.455974   64676 cri.go:89] found id: ""
	I1001 20:26:39.455984   64676 logs.go:276] 1 containers: [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf]
	I1001 20:26:39.456040   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.459912   64676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:26:39.459978   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:26:39.504152   64676 cri.go:89] found id: ""
	I1001 20:26:39.504179   64676 logs.go:276] 0 containers: []
	W1001 20:26:39.504187   64676 logs.go:278] No container was found matching "kindnet"
	I1001 20:26:39.504192   64676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1001 20:26:39.504241   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1001 20:26:39.538918   64676 cri.go:89] found id: "a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
	I1001 20:26:39.538940   64676 cri.go:89] found id: "652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:39.538947   64676 cri.go:89] found id: ""
	I1001 20:26:39.538957   64676 logs.go:276] 2 containers: [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b]
	I1001 20:26:39.539013   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.542832   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.546365   64676 logs.go:123] Gathering logs for storage-provisioner [652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b] ...
	I1001 20:26:39.546395   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:39.589286   64676 logs.go:123] Gathering logs for kubelet ...
	I1001 20:26:39.589320   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:26:39.657412   64676 logs.go:123] Gathering logs for dmesg ...
	I1001 20:26:39.657447   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:26:39.671553   64676 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:26:39.671581   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 20:26:39.786194   64676 logs.go:123] Gathering logs for kube-apiserver [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd] ...
	I1001 20:26:39.786226   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:39.829798   64676 logs.go:123] Gathering logs for coredns [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6] ...
	I1001 20:26:39.829831   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:39.865854   64676 logs.go:123] Gathering logs for kube-controller-manager [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf] ...
	I1001 20:26:39.865890   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:39.920702   64676 logs.go:123] Gathering logs for storage-provisioner [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa] ...
	I1001 20:26:39.920735   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
	I1001 20:26:39.959343   64676 logs.go:123] Gathering logs for etcd [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f] ...
	I1001 20:26:39.959375   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:40.001320   64676 logs.go:123] Gathering logs for kube-scheduler [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d] ...
	I1001 20:26:40.001354   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:40.037182   64676 logs.go:123] Gathering logs for kube-proxy [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8] ...
	I1001 20:26:40.037214   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:40.070072   64676 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:26:40.070098   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:26:40.492733   64676 logs.go:123] Gathering logs for container status ...
	I1001 20:26:40.492770   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:26:43.042801   64676 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I1001 20:26:43.048223   64676 api_server.go:279] https://192.168.61.93:8443/healthz returned 200:
	ok
	I1001 20:26:43.049199   64676 api_server.go:141] control plane version: v1.31.1
	I1001 20:26:43.049229   64676 api_server.go:131] duration metric: took 3.828968104s to wait for apiserver health ...
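The healthz wait logged above amounts to polling the apiserver endpoint until it answers 200. A minimal Go sketch of that pattern, for reference only; the URL, poll interval, and timeout below are illustrative assumptions rather than minikube's actual values.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves healthz over HTTPS with a cluster CA;
		// skipping verification keeps this sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s to return 200", url)
}

func main() {
	if err := waitForHealthz("https://192.168.61.93:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}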
	I1001 20:26:43.049239   64676 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 20:26:43.049267   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:26:43.049331   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:26:43.087098   64676 cri.go:89] found id: "a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:43.087132   64676 cri.go:89] found id: ""
	I1001 20:26:43.087144   64676 logs.go:276] 1 containers: [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd]
	I1001 20:26:43.087206   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.091606   64676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:26:43.091665   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:26:43.127154   64676 cri.go:89] found id: "586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:43.127177   64676 cri.go:89] found id: ""
	I1001 20:26:43.127184   64676 logs.go:276] 1 containers: [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f]
	I1001 20:26:43.127227   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.131246   64676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:26:43.131320   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:26:43.165473   64676 cri.go:89] found id: "4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:43.165503   64676 cri.go:89] found id: ""
	I1001 20:26:43.165514   64676 logs.go:276] 1 containers: [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6]
	I1001 20:26:43.165577   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.169908   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:26:43.169982   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:26:43.210196   64676 cri.go:89] found id: "89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:43.210225   64676 cri.go:89] found id: ""
	I1001 20:26:43.210235   64676 logs.go:276] 1 containers: [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d]
	I1001 20:26:43.210302   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.214253   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:26:43.214317   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:26:43.249533   64676 cri.go:89] found id: "fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:43.249555   64676 cri.go:89] found id: ""
	I1001 20:26:43.249563   64676 logs.go:276] 1 containers: [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8]
	I1001 20:26:43.249625   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.253555   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:26:43.253633   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:26:43.294711   64676 cri.go:89] found id: "69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:43.294734   64676 cri.go:89] found id: ""
	I1001 20:26:43.294742   64676 logs.go:276] 1 containers: [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf]
	I1001 20:26:43.294787   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.298960   64676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:26:43.299037   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:26:43.339542   64676 cri.go:89] found id: ""
	I1001 20:26:43.339572   64676 logs.go:276] 0 containers: []
	W1001 20:26:43.339582   64676 logs.go:278] No container was found matching "kindnet"
	I1001 20:26:43.339588   64676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1001 20:26:43.339667   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1001 20:26:43.382206   64676 cri.go:89] found id: "a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
	I1001 20:26:43.382230   64676 cri.go:89] found id: "652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:43.382234   64676 cri.go:89] found id: ""
	I1001 20:26:43.382241   64676 logs.go:276] 2 containers: [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b]
	I1001 20:26:43.382289   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.386473   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.390146   64676 logs.go:123] Gathering logs for kubelet ...
	I1001 20:26:43.390172   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:26:43.457659   64676 logs.go:123] Gathering logs for dmesg ...
	I1001 20:26:43.457699   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:26:43.471078   64676 logs.go:123] Gathering logs for kube-apiserver [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd] ...
	I1001 20:26:43.471109   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:43.518058   64676 logs.go:123] Gathering logs for etcd [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f] ...
	I1001 20:26:43.518093   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:43.559757   64676 logs.go:123] Gathering logs for kube-proxy [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8] ...
	I1001 20:26:43.559788   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:43.595485   64676 logs.go:123] Gathering logs for storage-provisioner [652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b] ...
	I1001 20:26:43.595513   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:43.628167   64676 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:26:43.628195   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 20:26:43.741206   64676 logs.go:123] Gathering logs for coredns [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6] ...
	I1001 20:26:43.741234   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:43.777220   64676 logs.go:123] Gathering logs for kube-scheduler [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d] ...
	I1001 20:26:43.777248   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:43.817507   64676 logs.go:123] Gathering logs for kube-controller-manager [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf] ...
	I1001 20:26:43.817536   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:43.880127   64676 logs.go:123] Gathering logs for storage-provisioner [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa] ...
	I1001 20:26:43.880161   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
	I1001 20:26:43.915172   64676 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:26:43.915199   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:26:44.289237   64676 logs.go:123] Gathering logs for container status ...
	I1001 20:26:44.289277   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:26:46.835363   64676 system_pods.go:59] 8 kube-system pods found
	I1001 20:26:46.835393   64676 system_pods.go:61] "coredns-7c65d6cfc9-g8jf8" [7fbddef1-a564-4ee8-ab53-ae838d0fd984] Running
	I1001 20:26:46.835398   64676 system_pods.go:61] "etcd-no-preload-262337" [086d7949-d20d-49d8-871d-a464de60e4cb] Running
	I1001 20:26:46.835402   64676 system_pods.go:61] "kube-apiserver-no-preload-262337" [d8473136-4e07-43e2-bd20-65232e2d5102] Running
	I1001 20:26:46.835405   64676 system_pods.go:61] "kube-controller-manager-no-preload-262337" [63c7d071-20cd-48c5-b410-b78e339b0731] Running
	I1001 20:26:46.835408   64676 system_pods.go:61] "kube-proxy-7rrkn" [e25a055c-0203-4fe7-8801-560b9cdb27bb] Running
	I1001 20:26:46.835412   64676 system_pods.go:61] "kube-scheduler-no-preload-262337" [3b962e64-eea6-4c24-a230-32c40106a4dd] Running
	I1001 20:26:46.835418   64676 system_pods.go:61] "metrics-server-6867b74b74-2rpwt" [235515ab-28fc-437b-983a-243f7a8fb183] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:26:46.835422   64676 system_pods.go:61] "storage-provisioner" [8832193a-39b4-49b9-b943-3241bb27fb8d] Running
	I1001 20:26:46.835431   64676 system_pods.go:74] duration metric: took 3.786183909s to wait for pod list to return data ...
	I1001 20:26:46.835441   64676 default_sa.go:34] waiting for default service account to be created ...
	I1001 20:26:46.838345   64676 default_sa.go:45] found service account: "default"
	I1001 20:26:46.838367   64676 default_sa.go:55] duration metric: took 2.918089ms for default service account to be created ...
	I1001 20:26:46.838375   64676 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 20:26:46.844822   64676 system_pods.go:86] 8 kube-system pods found
	I1001 20:26:46.844850   64676 system_pods.go:89] "coredns-7c65d6cfc9-g8jf8" [7fbddef1-a564-4ee8-ab53-ae838d0fd984] Running
	I1001 20:26:46.844856   64676 system_pods.go:89] "etcd-no-preload-262337" [086d7949-d20d-49d8-871d-a464de60e4cb] Running
	I1001 20:26:46.844860   64676 system_pods.go:89] "kube-apiserver-no-preload-262337" [d8473136-4e07-43e2-bd20-65232e2d5102] Running
	I1001 20:26:46.844863   64676 system_pods.go:89] "kube-controller-manager-no-preload-262337" [63c7d071-20cd-48c5-b410-b78e339b0731] Running
	I1001 20:26:46.844867   64676 system_pods.go:89] "kube-proxy-7rrkn" [e25a055c-0203-4fe7-8801-560b9cdb27bb] Running
	I1001 20:26:46.844870   64676 system_pods.go:89] "kube-scheduler-no-preload-262337" [3b962e64-eea6-4c24-a230-32c40106a4dd] Running
	I1001 20:26:46.844876   64676 system_pods.go:89] "metrics-server-6867b74b74-2rpwt" [235515ab-28fc-437b-983a-243f7a8fb183] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:26:46.844881   64676 system_pods.go:89] "storage-provisioner" [8832193a-39b4-49b9-b943-3241bb27fb8d] Running
	I1001 20:26:46.844889   64676 system_pods.go:126] duration metric: took 6.508902ms to wait for k8s-apps to be running ...
	I1001 20:26:46.844895   64676 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 20:26:46.844934   64676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:26:46.861543   64676 system_svc.go:56] duration metric: took 16.63712ms WaitForService to wait for kubelet
	I1001 20:26:46.861586   64676 kubeadm.go:582] duration metric: took 4m24.915538002s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:26:46.861614   64676 node_conditions.go:102] verifying NodePressure condition ...
	I1001 20:26:46.864599   64676 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 20:26:46.864632   64676 node_conditions.go:123] node cpu capacity is 2
	I1001 20:26:46.864644   64676 node_conditions.go:105] duration metric: took 3.023838ms to run NodePressure ...
	I1001 20:26:46.864657   64676 start.go:241] waiting for startup goroutines ...
	I1001 20:26:46.864667   64676 start.go:246] waiting for cluster config update ...
	I1001 20:26:46.864682   64676 start.go:255] writing updated cluster config ...
	I1001 20:26:46.864960   64676 ssh_runner.go:195] Run: rm -f paused
	I1001 20:26:46.924982   64676 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 20:26:46.926817   64676 out.go:177] * Done! kubectl is now configured to use "no-preload-262337" cluster and "default" namespace by default
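The final line of this run compares the kubectl client version with the cluster version and reports the minor-version skew. A small sketch of that comparison; the hard-coded version strings are assumptions for the example, not read from a live cluster.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component from a "major.minor.patch" version string.
func minor(v string) int {
	parts := strings.Split(v, ".")
	if len(parts) < 2 {
		return 0
	}
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectl, cluster := "1.31.1", "1.31.1"
	skew := minor(kubectl) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
}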
	I1001 20:26:45.880599   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:48.948631   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:55.028660   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:58.100570   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:04.180661   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:07.252656   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:13.332644   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:16.404640   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:22.484714   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:25.556606   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:31.636609   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:34.712725   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:40.788632   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
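The repeated "Error dialing TCP ... no route to host" lines come from a loop that keeps probing the guest's SSH port until the VM is reachable. A generic Go sketch of such a probe; the address, per-attempt timeout, and overall deadline are assumptions for illustration, not the values libmachine uses.

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH returns nil once a TCP connection to addr succeeds.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		fmt.Printf("dial %s: %v; retrying\n", addr, err)
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", addr)
}

func main() {
	if err := waitForSSH("192.168.50.4:22", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}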
	I1001 20:27:43.940129   65592 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1001 20:27:43.940232   65592 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1001 20:27:43.942002   65592 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1001 20:27:43.942068   65592 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:27:43.942170   65592 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:27:43.942281   65592 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:27:43.942421   65592 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1001 20:27:43.942518   65592 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 20:27:43.944271   65592 out.go:235]   - Generating certificates and keys ...
	I1001 20:27:43.944389   65592 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:27:43.944486   65592 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:27:43.944600   65592 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 20:27:43.944693   65592 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 20:27:43.944797   65592 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 20:27:43.944888   65592 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 20:27:43.944985   65592 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 20:27:43.945072   65592 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 20:27:43.945190   65592 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 20:27:43.945301   65592 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 20:27:43.945361   65592 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 20:27:43.945420   65592 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:27:43.945467   65592 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:27:43.945515   65592 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:27:43.945585   65592 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:27:43.945651   65592 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:27:43.945772   65592 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:27:43.945899   65592 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:27:43.945961   65592 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:27:43.946057   65592 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:27:43.860704   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:43.947517   65592 out.go:235]   - Booting up control plane ...
	I1001 20:27:43.947644   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:27:43.947767   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:27:43.947861   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:27:43.947978   65592 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:27:43.948185   65592 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1001 20:27:43.948258   65592 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1001 20:27:43.948396   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.948618   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.948695   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.948930   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.948991   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.949149   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.949232   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.949380   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.949439   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.949597   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.949616   65592 kubeadm.go:310] 
	I1001 20:27:43.949658   65592 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1001 20:27:43.949693   65592 kubeadm.go:310] 		timed out waiting for the condition
	I1001 20:27:43.949704   65592 kubeadm.go:310] 
	I1001 20:27:43.949737   65592 kubeadm.go:310] 	This error is likely caused by:
	I1001 20:27:43.949766   65592 kubeadm.go:310] 		- The kubelet is not running
	I1001 20:27:43.949863   65592 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1001 20:27:43.949871   65592 kubeadm.go:310] 
	I1001 20:27:43.949968   65592 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1001 20:27:43.950000   65592 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1001 20:27:43.950034   65592 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1001 20:27:43.950040   65592 kubeadm.go:310] 
	I1001 20:27:43.950136   65592 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1001 20:27:43.950207   65592 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1001 20:27:43.950213   65592 kubeadm.go:310] 
	I1001 20:27:43.950310   65592 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1001 20:27:43.950389   65592 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1001 20:27:43.950454   65592 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1001 20:27:43.950533   65592 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1001 20:27:43.950566   65592 kubeadm.go:310] 
	W1001 20:27:43.950665   65592 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
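The failure report above suggests listing the control-plane containers with crictl when the kubelet check times out. A small Go wrapper around that suggested command, filtering the same way as the printed pipeline; it assumes sudo and crictl are available on the node and is only a sketch of the troubleshooting step, not part of the test flow.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent to the command suggested in the log:
	//   crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	out, err := exec.Command("sudo", "crictl",
		"--runtime-endpoint", "/var/run/crio/crio.sock",
		"ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Println("crictl failed:", err)
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, "kube") && !strings.Contains(line, "pause") {
			fmt.Println(line)
		}
	}
}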
	
	I1001 20:27:43.950707   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1001 20:27:44.404995   65592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:27:44.421130   65592 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:27:44.431204   65592 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:27:44.431228   65592 kubeadm.go:157] found existing configuration files:
	
	I1001 20:27:44.431270   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 20:27:44.440792   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:27:44.440857   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:27:44.450469   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 20:27:44.459640   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:27:44.459695   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:27:44.469335   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 20:27:44.478848   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:27:44.478904   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:27:44.489162   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 20:27:44.501070   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:27:44.501157   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
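The sequence above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes the file when the endpoint is missing (here the files simply do not exist). A local Go sketch of that check; the real flow runs the grep and rm over SSH with sudo, so treat this as an illustration rather than minikube's implementation.

package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range confs {
		data, err := os.ReadFile(path)
		// Remove the config when it is unreadable or does not point at the
		// expected control-plane endpoint.
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%s is stale or missing, removing\n", path)
			_ = os.Remove(path) // a missing file is fine to ignore here
		}
	}
}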
	I1001 20:27:44.511970   65592 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 20:27:44.728685   65592 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 20:27:49.940611   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:53.016657   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:59.092700   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:02.164611   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:08.244707   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:11.316686   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:17.400607   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:20.468660   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:26.548687   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:29.624608   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:35.700638   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:38.772693   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:44.852721   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:47.924690   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:54.004674   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:57.080644   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:29:03.156750   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:29:06.232700   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:29:12.308749   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:29:15.380633   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:29:18.381649   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 20:29:18.381689   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetMachineName
	I1001 20:29:18.382037   68418 buildroot.go:166] provisioning hostname "default-k8s-diff-port-878552"
	I1001 20:29:18.382063   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetMachineName
	I1001 20:29:18.382291   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:18.384714   68418 machine.go:96] duration metric: took 4m37.419094583s to provisionDockerMachine
	I1001 20:29:18.384772   68418 fix.go:56] duration metric: took 4m37.442164125s for fixHost
	I1001 20:29:18.384782   68418 start.go:83] releasing machines lock for "default-k8s-diff-port-878552", held for 4m37.442187455s
	W1001 20:29:18.384813   68418 start.go:714] error starting host: provision: host is not running
	W1001 20:29:18.384993   68418 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1001 20:29:18.385017   68418 start.go:729] Will try again in 5 seconds ...
	I1001 20:29:23.387086   68418 start.go:360] acquireMachinesLock for default-k8s-diff-port-878552: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 20:29:23.387232   68418 start.go:364] duration metric: took 101.596µs to acquireMachinesLock for "default-k8s-diff-port-878552"
	I1001 20:29:23.387273   68418 start.go:96] Skipping create...Using existing machine configuration
	I1001 20:29:23.387284   68418 fix.go:54] fixHost starting: 
	I1001 20:29:23.387645   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:29:23.387669   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:29:23.403371   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39149
	I1001 20:29:23.404008   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:29:23.404580   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:29:23.404603   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:29:23.405181   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:29:23.405410   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:29:23.405560   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetState
	I1001 20:29:23.407563   68418 fix.go:112] recreateIfNeeded on default-k8s-diff-port-878552: state=Stopped err=<nil>
	I1001 20:29:23.407589   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	W1001 20:29:23.407771   68418 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 20:29:23.409721   68418 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-878552" ...
	I1001 20:29:23.410973   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Start
	I1001 20:29:23.411207   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Ensuring networks are active...
	I1001 20:29:23.412117   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Ensuring network default is active
	I1001 20:29:23.412576   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Ensuring network mk-default-k8s-diff-port-878552 is active
	I1001 20:29:23.412956   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Getting domain xml...
	I1001 20:29:23.413589   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Creating domain...
	I1001 20:29:24.744972   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting to get IP...
	I1001 20:29:24.746001   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:24.746641   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:24.746710   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:24.746607   69521 retry.go:31] will retry after 260.966833ms: waiting for machine to come up
	I1001 20:29:25.009284   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.009825   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.009849   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:25.009778   69521 retry.go:31] will retry after 308.10041ms: waiting for machine to come up
	I1001 20:29:25.319153   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.319717   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.319752   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:25.319652   69521 retry.go:31] will retry after 342.802984ms: waiting for machine to come up
	I1001 20:29:25.664405   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.664893   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.664920   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:25.664816   69521 retry.go:31] will retry after 397.002924ms: waiting for machine to come up
	I1001 20:29:26.063628   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:26.064235   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:26.064259   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:26.064201   69521 retry.go:31] will retry after 526.648832ms: waiting for machine to come up
	I1001 20:29:26.592834   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:26.593284   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:26.593307   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:26.593226   69521 retry.go:31] will retry after 642.569388ms: waiting for machine to come up
	I1001 20:29:27.237224   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:27.237775   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:27.237808   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:27.237714   69521 retry.go:31] will retry after 963.05932ms: waiting for machine to come up
	I1001 20:29:28.202841   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:28.203333   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:28.203363   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:28.203287   69521 retry.go:31] will retry after 1.372004234s: waiting for machine to come up
	I1001 20:29:29.577175   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:29.577678   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:29.577706   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:29.577627   69521 retry.go:31] will retry after 1.693508507s: waiting for machine to come up
	I1001 20:29:31.273758   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:31.274247   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:31.274274   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:31.274201   69521 retry.go:31] will retry after 1.793304779s: waiting for machine to come up
	I1001 20:29:33.069467   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:33.069894   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:33.069915   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:33.069861   69521 retry.go:31] will retry after 2.825253867s: waiting for machine to come up
	I1001 20:29:40.678676   65592 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1001 20:29:40.678797   65592 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1001 20:29:40.680563   65592 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1001 20:29:40.680613   65592 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:29:40.680680   65592 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:29:40.680788   65592 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:29:40.680868   65592 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1001 20:29:40.681030   65592 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 20:29:40.683042   65592 out.go:235]   - Generating certificates and keys ...
	I1001 20:29:40.683149   65592 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:29:40.683245   65592 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:29:40.683353   65592 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 20:29:40.683435   65592 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 20:29:40.683545   65592 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 20:29:40.683605   65592 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 20:29:40.683665   65592 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 20:29:40.683723   65592 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 20:29:40.683793   65592 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 20:29:40.683878   65592 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 20:29:40.683956   65592 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 20:29:40.684054   65592 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:29:40.684127   65592 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:29:40.684212   65592 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:29:40.684303   65592 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:29:40.684414   65592 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:29:40.684551   65592 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:29:40.684661   65592 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:29:40.684724   65592 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:29:40.684827   65592 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:29:35.897417   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:35.897916   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:35.897949   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:35.897862   69521 retry.go:31] will retry after 3.519866937s: waiting for machine to come up
	I1001 20:29:39.419142   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:39.419528   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:39.419554   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:39.419494   69521 retry.go:31] will retry after 3.507101438s: waiting for machine to come up
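The retry.go lines show the wait for the restarted VM to obtain an IP address, with the delay growing on each attempt. A generic sketch of that backoff loop; getIP below is a hypothetical stand-in for whatever lookup is being retried, and the growth factor and attempt limit are assumptions.

package main

import (
	"errors"
	"fmt"
	"time"
)

// getIP is a placeholder for the real lookup (for example, reading the DHCP lease).
func getIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func main() {
	backoff := 250 * time.Millisecond
	for attempt := 1; attempt <= 15; attempt++ {
		if ip, err := getIP(); err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		fmt.Printf("attempt %d: will retry after %s\n", attempt, backoff)
		time.Sleep(backoff)
		backoff = backoff * 3 / 2 // grow the delay a little each round, roughly like the log
	}
	fmt.Println("machine never reported an IP")
}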
	I1001 20:29:40.686427   65592 out.go:235]   - Booting up control plane ...
	I1001 20:29:40.686534   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:29:40.686621   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:29:40.686710   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:29:40.686820   65592 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:29:40.686996   65592 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1001 20:29:40.687063   65592 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1001 20:29:40.687127   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.687336   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.687443   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.687674   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.687759   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.687958   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.688047   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.688212   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.688274   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.688510   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.688519   65592 kubeadm.go:310] 
	I1001 20:29:40.688566   65592 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1001 20:29:40.688610   65592 kubeadm.go:310] 		timed out waiting for the condition
	I1001 20:29:40.688617   65592 kubeadm.go:310] 
	I1001 20:29:40.688646   65592 kubeadm.go:310] 	This error is likely caused by:
	I1001 20:29:40.688680   65592 kubeadm.go:310] 		- The kubelet is not running
	I1001 20:29:40.688770   65592 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1001 20:29:40.688778   65592 kubeadm.go:310] 
	I1001 20:29:40.688882   65592 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1001 20:29:40.688937   65592 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1001 20:29:40.688986   65592 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1001 20:29:40.688996   65592 kubeadm.go:310] 
	I1001 20:29:40.689114   65592 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1001 20:29:40.689222   65592 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1001 20:29:40.689237   65592 kubeadm.go:310] 
	I1001 20:29:40.689376   65592 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1001 20:29:40.689517   65592 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1001 20:29:40.689638   65592 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1001 20:29:40.689709   65592 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1001 20:29:40.689786   65592 kubeadm.go:310] 
	I1001 20:29:40.689796   65592 kubeadm.go:394] duration metric: took 7m56.416911577s to StartCluster
	I1001 20:29:40.689838   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:29:40.689896   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:29:40.733027   65592 cri.go:89] found id: ""
	I1001 20:29:40.733059   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.733068   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:29:40.733073   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:29:40.733120   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:29:40.767975   65592 cri.go:89] found id: ""
	I1001 20:29:40.768010   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.768021   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:29:40.768029   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:29:40.768095   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:29:40.802624   65592 cri.go:89] found id: ""
	I1001 20:29:40.802657   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.802668   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:29:40.802676   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:29:40.802748   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:29:40.838109   65592 cri.go:89] found id: ""
	I1001 20:29:40.838142   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.838151   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:29:40.838157   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:29:40.838204   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:29:40.873083   65592 cri.go:89] found id: ""
	I1001 20:29:40.873112   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.873124   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:29:40.873131   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:29:40.873192   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:29:40.907675   65592 cri.go:89] found id: ""
	I1001 20:29:40.907705   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.907714   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:29:40.907720   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:29:40.907775   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:29:40.941641   65592 cri.go:89] found id: ""
	I1001 20:29:40.941669   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.941678   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:29:40.941691   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:29:40.941748   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:29:40.978189   65592 cri.go:89] found id: ""
	I1001 20:29:40.978216   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.978227   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:29:40.978238   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:29:40.978254   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:29:41.053798   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:29:41.053823   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:29:41.053835   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:29:41.160669   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:29:41.160715   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:29:41.218152   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:29:41.218182   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:29:41.274784   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:29:41.274821   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1001 20:29:41.288554   65592 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1001 20:29:41.288613   65592 out.go:270] * 
	W1001 20:29:41.288663   65592 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1001 20:29:41.288674   65592 out.go:270] * 
	W1001 20:29:41.289525   65592 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 20:29:41.292969   65592 out.go:201] 
	W1001 20:29:41.294238   65592 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1001 20:29:41.294278   65592 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1001 20:29:41.294297   65592 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1001 20:29:41.295783   65592 out.go:201] 
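	The failure above is kubeadm's generic "kubelet not running or healthy" timeout; the log itself points at the kubelet and at a possible cgroup-driver mismatch between the kubelet and CRI-O. A minimal troubleshooting sketch on the affected node (illustrative only, not part of the recorded run; it reuses the commands and file paths quoted in this log, and the config paths should be verified locally) would be:

		# Is the kubelet up, and what does its health endpoint report?
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet --no-pager | tail -n 100
		curl -sSL http://localhost:10248/healthz

		# Did any control-plane container start under CRI-O at all?
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

		# Compare cgroup drivers between CRI-O and the kubelet
		# (paths taken from the provisioning steps recorded in this log; assumed, not verified here)
		grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf
		grep -i cgroupDriver /var/lib/kubelet/config.yaml

	If the two drivers disagree, re-running the profile with the flag suggested above (--extra-config=kubelet.cgroup-driver=systemd) is the usual remedy; see the linked minikube issue 4172 for background.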
	I1001 20:29:42.929490   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:42.930036   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has current primary IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:42.930058   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Found IP for machine: 192.168.50.4
	I1001 20:29:42.930091   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Reserving static IP address...
	I1001 20:29:42.930623   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-878552", mac: "52:54:00:72:13:05", ip: "192.168.50.4"} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:42.930660   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | skip adding static IP to network mk-default-k8s-diff-port-878552 - found existing host DHCP lease matching {name: "default-k8s-diff-port-878552", mac: "52:54:00:72:13:05", ip: "192.168.50.4"}
	I1001 20:29:42.930686   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Reserved static IP address: 192.168.50.4
	I1001 20:29:42.930703   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for SSH to be available...
	I1001 20:29:42.930719   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | Getting to WaitForSSH function...
	I1001 20:29:42.933472   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:42.933911   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:42.933948   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:42.934106   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | Using SSH client type: external
	I1001 20:29:42.934134   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa (-rw-------)
	I1001 20:29:42.934168   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.4 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 20:29:42.934190   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | About to run SSH command:
	I1001 20:29:42.934210   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | exit 0
	I1001 20:29:43.064425   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | SSH cmd err, output: <nil>: 
	I1001 20:29:43.064821   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetConfigRaw
	I1001 20:29:43.065476   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetIP
	I1001 20:29:43.068442   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.068951   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.068982   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.069236   68418 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/config.json ...
	I1001 20:29:43.069476   68418 machine.go:93] provisionDockerMachine start ...
	I1001 20:29:43.069498   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:29:43.069726   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:43.072374   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.072720   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.072754   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.072974   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:43.073170   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.073358   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.073501   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:43.073685   68418 main.go:141] libmachine: Using SSH client type: native
	I1001 20:29:43.073919   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1001 20:29:43.073946   68418 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 20:29:43.188588   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1001 20:29:43.188626   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetMachineName
	I1001 20:29:43.188887   68418 buildroot.go:166] provisioning hostname "default-k8s-diff-port-878552"
	I1001 20:29:43.188948   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetMachineName
	I1001 20:29:43.189182   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:43.192158   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.192550   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.192575   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.192743   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:43.192918   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.193081   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.193193   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:43.193317   68418 main.go:141] libmachine: Using SSH client type: native
	I1001 20:29:43.193466   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1001 20:29:43.193478   68418 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-878552 && echo "default-k8s-diff-port-878552" | sudo tee /etc/hostname
	I1001 20:29:43.318342   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-878552
	
	I1001 20:29:43.318381   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:43.321205   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.321777   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.321807   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.322031   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:43.322218   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.322360   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.322515   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:43.322729   68418 main.go:141] libmachine: Using SSH client type: native
	I1001 20:29:43.322907   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1001 20:29:43.322925   68418 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-878552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-878552/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-878552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 20:29:43.440839   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 20:29:43.440884   68418 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 20:29:43.440949   68418 buildroot.go:174] setting up certificates
	I1001 20:29:43.440966   68418 provision.go:84] configureAuth start
	I1001 20:29:43.440982   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetMachineName
	I1001 20:29:43.441238   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetIP
	I1001 20:29:43.443849   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.444223   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.444257   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.444432   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:43.446569   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.447004   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.447032   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.447130   68418 provision.go:143] copyHostCerts
	I1001 20:29:43.447210   68418 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 20:29:43.447224   68418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 20:29:43.447317   68418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 20:29:43.447430   68418 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 20:29:43.447442   68418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 20:29:43.447484   68418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 20:29:43.447560   68418 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 20:29:43.447570   68418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 20:29:43.447602   68418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 20:29:43.447729   68418 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-878552 san=[127.0.0.1 192.168.50.4 default-k8s-diff-port-878552 localhost minikube]
	I1001 20:29:43.597134   68418 provision.go:177] copyRemoteCerts
	I1001 20:29:43.597195   68418 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 20:29:43.597216   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:43.599988   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.600379   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.600414   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.600598   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:43.600799   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.600970   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:43.601115   68418 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa Username:docker}
	I1001 20:29:43.687211   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 20:29:43.714280   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1001 20:29:43.738536   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 20:29:43.764130   68418 provision.go:87] duration metric: took 323.147928ms to configureAuth
	I1001 20:29:43.764163   68418 buildroot.go:189] setting minikube options for container-runtime
	I1001 20:29:43.764353   68418 config.go:182] Loaded profile config "default-k8s-diff-port-878552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:29:43.764462   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:43.767588   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.767962   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.767991   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.768181   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:43.768339   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.768525   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.768665   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:43.768833   68418 main.go:141] libmachine: Using SSH client type: native
	I1001 20:29:43.768994   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1001 20:29:43.769013   68418 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 20:29:43.998941   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 20:29:43.998964   68418 machine.go:96] duration metric: took 929.475626ms to provisionDockerMachine
	I1001 20:29:43.998976   68418 start.go:293] postStartSetup for "default-k8s-diff-port-878552" (driver="kvm2")
	I1001 20:29:43.998989   68418 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 20:29:43.999008   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:29:43.999305   68418 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 20:29:43.999332   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:44.001854   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.002381   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:44.002433   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.002555   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:44.002787   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:44.002967   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:44.003142   68418 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa Username:docker}
	I1001 20:29:44.091378   68418 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 20:29:44.096207   68418 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 20:29:44.096235   68418 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 20:29:44.096315   68418 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 20:29:44.096424   68418 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 20:29:44.096531   68418 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 20:29:44.106232   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 20:29:44.130524   68418 start.go:296] duration metric: took 131.532724ms for postStartSetup
	I1001 20:29:44.130564   68418 fix.go:56] duration metric: took 20.743280839s for fixHost
	I1001 20:29:44.130589   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:44.133873   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.134285   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:44.134309   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.134509   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:44.134719   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:44.134873   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:44.135025   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:44.135172   68418 main.go:141] libmachine: Using SSH client type: native
	I1001 20:29:44.135362   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1001 20:29:44.135376   68418 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 20:29:44.249136   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727814584.207146331
	
	I1001 20:29:44.249160   68418 fix.go:216] guest clock: 1727814584.207146331
	I1001 20:29:44.249189   68418 fix.go:229] Guest: 2024-10-01 20:29:44.207146331 +0000 UTC Remote: 2024-10-01 20:29:44.13056925 +0000 UTC m=+303.335525185 (delta=76.577081ms)
	I1001 20:29:44.249215   68418 fix.go:200] guest clock delta is within tolerance: 76.577081ms
	I1001 20:29:44.249220   68418 start.go:83] releasing machines lock for "default-k8s-diff-port-878552", held for 20.861972701s
	I1001 20:29:44.249238   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:29:44.249527   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetIP
	I1001 20:29:44.252984   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.253526   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:44.253569   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.253903   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:29:44.254449   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:29:44.254618   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:29:44.254680   68418 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 20:29:44.254727   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:44.254810   68418 ssh_runner.go:195] Run: cat /version.json
	I1001 20:29:44.254833   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:44.257550   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.257826   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.258077   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:44.258114   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.258363   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:44.258489   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:44.258529   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.258563   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:44.258683   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:44.258784   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:44.258832   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:44.258915   68418 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa Username:docker}
	I1001 20:29:44.258965   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:44.259113   68418 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa Username:docker}
	I1001 20:29:44.379049   68418 ssh_runner.go:195] Run: systemctl --version
	I1001 20:29:44.384985   68418 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 20:29:44.527579   68418 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 20:29:44.533267   68418 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 20:29:44.533357   68418 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 20:29:44.552308   68418 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 20:29:44.552333   68418 start.go:495] detecting cgroup driver to use...
	I1001 20:29:44.552421   68418 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 20:29:44.573762   68418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 20:29:44.588010   68418 docker.go:217] disabling cri-docker service (if available) ...
	I1001 20:29:44.588063   68418 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 20:29:44.602369   68418 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 20:29:44.618754   68418 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 20:29:44.757380   68418 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 20:29:44.941718   68418 docker.go:233] disabling docker service ...
	I1001 20:29:44.941790   68418 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 20:29:44.957306   68418 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 20:29:44.971723   68418 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 20:29:45.094124   68418 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 20:29:45.220645   68418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 20:29:45.236217   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 20:29:45.255752   68418 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 20:29:45.255820   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:29:45.266327   68418 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 20:29:45.266398   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:29:45.276964   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:29:45.288013   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:29:45.298669   68418 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 20:29:45.309693   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:29:45.320041   68418 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:29:45.336621   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:29:45.346862   68418 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 20:29:45.357656   68418 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 20:29:45.357717   68418 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 20:29:45.372693   68418 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 20:29:45.383796   68418 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:29:45.524957   68418 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 20:29:45.611630   68418 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 20:29:45.611702   68418 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 20:29:45.616520   68418 start.go:563] Will wait 60s for crictl version
	I1001 20:29:45.616587   68418 ssh_runner.go:195] Run: which crictl
	I1001 20:29:45.620321   68418 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 20:29:45.661806   68418 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 20:29:45.661890   68418 ssh_runner.go:195] Run: crio --version
	I1001 20:29:45.690843   68418 ssh_runner.go:195] Run: crio --version
	I1001 20:29:45.720183   68418 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 20:29:45.721659   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetIP
	I1001 20:29:45.724986   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:45.725349   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:45.725376   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:45.725583   68418 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1001 20:29:45.729522   68418 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 20:29:45.741877   68418 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-878552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-878552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.4 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 20:29:45.742008   68418 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 20:29:45.742051   68418 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:29:45.779002   68418 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1001 20:29:45.779081   68418 ssh_runner.go:195] Run: which lz4
	I1001 20:29:45.782751   68418 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 20:29:45.786704   68418 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 20:29:45.786733   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1001 20:29:47.072431   68418 crio.go:462] duration metric: took 1.289701438s to copy over tarball
	I1001 20:29:47.072508   68418 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 20:29:49.166576   68418 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.094040254s)
	I1001 20:29:49.166604   68418 crio.go:469] duration metric: took 2.094143226s to extract the tarball
	I1001 20:29:49.166613   68418 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1001 20:29:49.203988   68418 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:29:49.250464   68418 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 20:29:49.250490   68418 cache_images.go:84] Images are preloaded, skipping loading
	I1001 20:29:49.250499   68418 kubeadm.go:934] updating node { 192.168.50.4 8444 v1.31.1 crio true true} ...
	I1001 20:29:49.250612   68418 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-878552 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-878552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 20:29:49.250697   68418 ssh_runner.go:195] Run: crio config
	I1001 20:29:49.298003   68418 cni.go:84] Creating CNI manager for ""
	I1001 20:29:49.298024   68418 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:29:49.298032   68418 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 20:29:49.298055   68418 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.4 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-878552 NodeName:default-k8s-diff-port-878552 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 20:29:49.298183   68418 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.4
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-878552"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 20:29:49.298253   68418 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 20:29:49.308945   68418 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 20:29:49.309011   68418 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 20:29:49.319017   68418 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1001 20:29:49.335588   68418 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 20:29:49.351598   68418 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I1001 20:29:49.369172   68418 ssh_runner.go:195] Run: grep 192.168.50.4	control-plane.minikube.internal$ /etc/hosts
	I1001 20:29:49.372755   68418 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.4	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
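Both hosts-file edits (host.minikube.internal earlier, control-plane.minikube.internal here) use the same grep-out-then-append pattern. Afterwards /etc/hosts on the node should carry two minikube-managed entries; a sketch built from the values in those commands:

	$ grep minikube.internal /etc/hosts
	192.168.50.1	host.minikube.internal
	192.168.50.4	control-plane.minikube.internal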
	I1001 20:29:49.385529   68418 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:29:49.509676   68418 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:29:49.527149   68418 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552 for IP: 192.168.50.4
	I1001 20:29:49.527170   68418 certs.go:194] generating shared ca certs ...
	I1001 20:29:49.527185   68418 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:29:49.527321   68418 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 20:29:49.527368   68418 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 20:29:49.527378   68418 certs.go:256] generating profile certs ...
	I1001 20:29:49.527456   68418 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/client.key
	I1001 20:29:49.527514   68418 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/apiserver.key.7bbee9b6
	I1001 20:29:49.527555   68418 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/proxy-client.key
	I1001 20:29:49.527668   68418 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem (1338 bytes)
	W1001 20:29:49.527707   68418 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430_empty.pem, impossibly tiny 0 bytes
	I1001 20:29:49.527735   68418 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 20:29:49.527772   68418 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 20:29:49.527811   68418 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 20:29:49.527848   68418 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 20:29:49.527907   68418 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem (1708 bytes)
	I1001 20:29:49.529210   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 20:29:49.577743   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 20:29:49.617960   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 20:29:49.659543   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 20:29:49.709464   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1001 20:29:49.734308   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 20:29:49.759576   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 20:29:49.784416   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 20:29:49.809150   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 20:29:49.833580   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem --> /usr/share/ca-certificates/18430.pem (1338 bytes)
	I1001 20:29:49.857628   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /usr/share/ca-certificates/184302.pem (1708 bytes)
	I1001 20:29:49.880924   68418 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 20:29:49.897478   68418 ssh_runner.go:195] Run: openssl version
	I1001 20:29:49.903488   68418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 20:29:49.914490   68418 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:29:49.919105   68418 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:29:49.919165   68418 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:29:49.925133   68418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 20:29:49.936294   68418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18430.pem && ln -fs /usr/share/ca-certificates/18430.pem /etc/ssl/certs/18430.pem"
	I1001 20:29:49.946630   68418 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18430.pem
	I1001 20:29:49.951255   68418 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 20:29:49.951308   68418 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18430.pem
	I1001 20:29:49.957277   68418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18430.pem /etc/ssl/certs/51391683.0"
	I1001 20:29:49.971166   68418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/184302.pem && ln -fs /usr/share/ca-certificates/184302.pem /etc/ssl/certs/184302.pem"
	I1001 20:29:49.982558   68418 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/184302.pem
	I1001 20:29:49.986947   68418 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 20:29:49.987003   68418 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/184302.pem
	I1001 20:29:49.992569   68418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/184302.pem /etc/ssl/certs/3ec20f2e.0"
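The ln -fs targets above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names: OpenSSL looks up CAs in /etc/ssl/certs via <hash>.0 symlinks, and the hash is what openssl x509 -hash prints. A small sketch of the same pattern for the minikube CA, with the hash value copied from the log above:

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0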
	I1001 20:29:50.002922   68418 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 20:29:50.007707   68418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1001 20:29:50.013717   68418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1001 20:29:50.020166   68418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1001 20:29:50.026795   68418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1001 20:29:50.033544   68418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1001 20:29:50.039686   68418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
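Each openssl run above passes -checkend 86400, i.e. "will this certificate still be valid 86400 seconds (24 h) from now?". A standalone sketch of the same check on a single cert:

	$ sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	    && echo "still valid in 24h" \
	    || echo "expires within 24h"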
	I1001 20:29:50.045837   68418 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-878552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-878552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.4 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:29:50.045971   68418 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 20:29:50.046025   68418 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 20:29:50.086925   68418 cri.go:89] found id: ""
	I1001 20:29:50.086999   68418 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 20:29:50.097130   68418 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1001 20:29:50.097167   68418 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1001 20:29:50.097222   68418 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1001 20:29:50.108298   68418 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1001 20:29:50.109389   68418 kubeconfig.go:125] found "default-k8s-diff-port-878552" server: "https://192.168.50.4:8444"
	I1001 20:29:50.111587   68418 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1001 20:29:50.122158   68418 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.4
	I1001 20:29:50.122199   68418 kubeadm.go:1160] stopping kube-system containers ...
	I1001 20:29:50.122213   68418 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1001 20:29:50.122281   68418 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 20:29:50.160351   68418 cri.go:89] found id: ""
	I1001 20:29:50.160434   68418 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1001 20:29:50.178857   68418 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:29:50.190857   68418 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:29:50.190879   68418 kubeadm.go:157] found existing configuration files:
	
	I1001 20:29:50.190926   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1001 20:29:50.200391   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:29:50.200449   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:29:50.210388   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1001 20:29:50.219943   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:29:50.220007   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:29:50.229576   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1001 20:29:50.239983   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:29:50.240055   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:29:50.251062   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1001 20:29:50.261349   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:29:50.261430   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 20:29:50.271284   68418 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 20:29:50.281256   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:29:50.393255   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:29:51.469349   68418 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.076029092s)
	I1001 20:29:51.469386   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:29:51.683522   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:29:51.749545   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:29:51.856549   68418 api_server.go:52] waiting for apiserver process to appear ...
	I1001 20:29:51.856662   68418 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:29:52.356980   68418 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:29:52.857568   68418 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:29:53.357123   68418 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:29:53.372308   68418 api_server.go:72] duration metric: took 1.515757915s to wait for apiserver process to appear ...
	I1001 20:29:53.372341   68418 api_server.go:88] waiting for apiserver healthz status ...
	I1001 20:29:53.372387   68418 api_server.go:253] Checking apiserver healthz at https://192.168.50.4:8444/healthz ...
	I1001 20:29:53.372877   68418 api_server.go:269] stopped: https://192.168.50.4:8444/healthz: Get "https://192.168.50.4:8444/healthz": dial tcp 192.168.50.4:8444: connect: connection refused
	I1001 20:29:53.872447   68418 api_server.go:253] Checking apiserver healthz at https://192.168.50.4:8444/healthz ...
	I1001 20:29:56.591087   68418 api_server.go:279] https://192.168.50.4:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1001 20:29:56.591111   68418 api_server.go:103] status: https://192.168.50.4:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1001 20:29:56.591122   68418 api_server.go:253] Checking apiserver healthz at https://192.168.50.4:8444/healthz ...
	I1001 20:29:56.668641   68418 api_server.go:279] https://192.168.50.4:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1001 20:29:56.668672   68418 api_server.go:103] status: https://192.168.50.4:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1001 20:29:56.872906   68418 api_server.go:253] Checking apiserver healthz at https://192.168.50.4:8444/healthz ...
	I1001 20:29:56.882393   68418 api_server.go:279] https://192.168.50.4:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 20:29:56.882433   68418 api_server.go:103] status: https://192.168.50.4:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 20:29:57.372590   68418 api_server.go:253] Checking apiserver healthz at https://192.168.50.4:8444/healthz ...
	I1001 20:29:57.377715   68418 api_server.go:279] https://192.168.50.4:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 20:29:57.377745   68418 api_server.go:103] status: https://192.168.50.4:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 20:29:57.873466   68418 api_server.go:253] Checking apiserver healthz at https://192.168.50.4:8444/healthz ...
	I1001 20:29:57.879628   68418 api_server.go:279] https://192.168.50.4:8444/healthz returned 200:
	ok
	I1001 20:29:57.889478   68418 api_server.go:141] control plane version: v1.31.1
	I1001 20:29:57.889512   68418 api_server.go:131] duration metric: took 4.517163838s to wait for apiserver health ...
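The healthz polling above progresses from connection refused, to 403 (the probe hits /healthz before the RBAC bootstrap roles exist, so system:anonymous is rejected), to 500 while the rbac/bootstrap-roles and scheduling poststarthooks are still pending, and finally to 200. Once the bootstrap roles are in place, the default system:public-info-viewer binding makes /healthz readable without credentials, so the same check can be reproduced by hand; a sketch using the address and port from this log:

	$ curl -sk "https://192.168.50.4:8444/healthz?verbose"
	# ?verbose prints the per-check [+]/[-] lines seen above; without it the body is just "ok"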
	I1001 20:29:57.889520   68418 cni.go:84] Creating CNI manager for ""
	I1001 20:29:57.889534   68418 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:29:57.891485   68418 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 20:29:57.892936   68418 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 20:29:57.910485   68418 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1001 20:29:57.930071   68418 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 20:29:57.940155   68418 system_pods.go:59] 8 kube-system pods found
	I1001 20:29:57.940191   68418 system_pods.go:61] "coredns-7c65d6cfc9-cmchv" [55a0612c-d596-4799-a9f9-0b6d9361ca15] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 20:29:57.940202   68418 system_pods.go:61] "etcd-default-k8s-diff-port-878552" [bcd7c228-d83d-4eec-9a64-f33dac086dcd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1001 20:29:57.940211   68418 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-878552" [23602015-b245-4e14-a076-2e9efb0f2f66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1001 20:29:57.940232   68418 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-878552" [e94298d4-75e3-4fbb-b361-6e5248273355] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1001 20:29:57.940239   68418 system_pods.go:61] "kube-proxy-sxxfj" [2bd75205-221e-498e-8a80-1e7a727fd4e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1001 20:29:57.940246   68418 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-878552" [ddcacd2c-3781-46df-83f8-e6763485a55d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1001 20:29:57.940254   68418 system_pods.go:61] "metrics-server-6867b74b74-b62f8" [26359941-b4d3-442c-ae46-4303a2f7b5e0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:29:57.940262   68418 system_pods.go:61] "storage-provisioner" [a34592b0-f9e5-465b-9d64-07cf84f0c951] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1001 20:29:57.940279   68418 system_pods.go:74] duration metric: took 10.189531ms to wait for pod list to return data ...
	I1001 20:29:57.940292   68418 node_conditions.go:102] verifying NodePressure condition ...
	I1001 20:29:57.945316   68418 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 20:29:57.945349   68418 node_conditions.go:123] node cpu capacity is 2
	I1001 20:29:57.945362   68418 node_conditions.go:105] duration metric: took 5.063896ms to run NodePressure ...
	I1001 20:29:57.945380   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:29:58.233781   68418 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1001 20:29:58.237692   68418 kubeadm.go:739] kubelet initialised
	I1001 20:29:58.237713   68418 kubeadm.go:740] duration metric: took 3.903724ms waiting for restarted kubelet to initialise ...
	I1001 20:29:58.237721   68418 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:29:58.243500   68418 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-cmchv" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:00.249577   68418 pod_ready.go:103] pod "coredns-7c65d6cfc9-cmchv" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:02.250329   68418 pod_ready.go:103] pod "coredns-7c65d6cfc9-cmchv" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:04.750635   68418 pod_ready.go:103] pod "coredns-7c65d6cfc9-cmchv" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:06.751559   68418 pod_ready.go:93] pod "coredns-7c65d6cfc9-cmchv" in "kube-system" namespace has status "Ready":"True"
	I1001 20:30:06.751583   68418 pod_ready.go:82] duration metric: took 8.508053751s for pod "coredns-7c65d6cfc9-cmchv" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:06.751594   68418 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:08.757727   68418 pod_ready.go:103] pod "etcd-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:10.260326   68418 pod_ready.go:93] pod "etcd-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:30:10.260352   68418 pod_ready.go:82] duration metric: took 3.508751351s for pod "etcd-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.260388   68418 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.267041   68418 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:30:10.267071   68418 pod_ready.go:82] duration metric: took 6.67429ms for pod "kube-apiserver-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.267083   68418 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.773135   68418 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:30:10.773156   68418 pod_ready.go:82] duration metric: took 506.065053ms for pod "kube-controller-manager-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.773166   68418 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-sxxfj" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.777890   68418 pod_ready.go:93] pod "kube-proxy-sxxfj" in "kube-system" namespace has status "Ready":"True"
	I1001 20:30:10.777910   68418 pod_ready.go:82] duration metric: took 4.738315ms for pod "kube-proxy-sxxfj" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.777918   68418 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.782610   68418 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:30:10.782634   68418 pod_ready.go:82] duration metric: took 4.708989ms for pod "kube-scheduler-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.782644   68418 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:12.789050   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:15.290635   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:17.290867   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:19.789502   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:21.789999   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:24.289487   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:26.789083   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:28.789955   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:30.790439   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:33.289188   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:35.289313   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:37.289903   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:39.788459   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:41.788633   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:43.788867   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:46.290002   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:48.789891   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:51.289334   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:53.788643   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:55.789983   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:58.288949   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:00.289478   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:02.290789   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:04.789722   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:07.289474   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:09.290183   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:11.790355   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:14.289284   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:16.289536   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:18.289606   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:20.789261   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:22.789463   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:25.290185   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:27.788643   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:29.788778   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:31.790285   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:34.288230   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:36.288784   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:38.289862   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:40.789317   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:43.289232   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:45.290400   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:47.788723   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:49.789327   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:52.289114   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:54.788895   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:56.788984   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:59.288473   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:01.789415   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:04.289328   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:06.289615   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:08.788879   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:10.790191   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:13.288885   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:15.789008   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:17.789191   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:19.789559   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:22.288958   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:24.290206   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:26.788241   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:28.789457   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:31.288929   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:33.789418   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:35.789932   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:38.288742   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:40.289667   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:42.789129   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:44.790115   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:47.289310   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:49.289558   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:51.789255   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:54.289586   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:56.788032   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:58.789012   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:01.289206   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:03.788129   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:05.788915   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:07.790124   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:10.289206   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:12.789314   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:14.789636   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:17.288443   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:19.289524   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:21.289650   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:23.789802   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:26.289735   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:28.788897   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:30.789339   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:33.289295   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:35.289664   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:37.789968   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:40.289657   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:42.789430   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:45.289320   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:47.789980   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:50.287836   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:52.289028   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:54.788936   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:56.789521   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:59.289778   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:34:01.788398   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:34:03.789045   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:34:05.789391   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:34:08.289322   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:34:10.783748   68418 pod_ready.go:82] duration metric: took 4m0.001085136s for pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace to be "Ready" ...
	E1001 20:34:10.783784   68418 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace to be "Ready" (will not retry!)
	I1001 20:34:10.783805   68418 pod_ready.go:39] duration metric: took 4m12.546072786s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
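
Editor's note: the long run of pod_ready.go:103 lines above is a poll loop that re-checks the metrics-server pod roughly every 2.5s until a 4m0s deadline expires. The sketch below reproduces that wait pattern with client-go; the package name, function name, and pre-built clientset are assumptions for illustration, not minikube's actual pod_ready.go code.

    package readiness

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodReady polls the named pod until its Ready condition is True or the
    // 4m0s deadline expires, mirroring the ~2.5s cadence visible in the log above.
    func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        return wait.PollUntilContextTimeout(ctx, 2500*time.Millisecond, 4*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API errors: keep polling
                }
                for _, cond := range pod.Status.Conditions {
                    if cond.Type == corev1.PodReady {
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

When the deadline passes, the loop returns an error rather than retrying, which corresponds to the "will not retry!" warning above.
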
	I1001 20:34:10.783831   68418 kubeadm.go:597] duration metric: took 4m20.686657254s to restartPrimaryControlPlane
	W1001 20:34:10.783895   68418 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1001 20:34:10.783926   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1001 20:34:36.981542   68418 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.197594945s)
	I1001 20:34:36.981628   68418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:34:37.005650   68418 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 20:34:37.017406   68418 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:34:37.031711   68418 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:34:37.031737   68418 kubeadm.go:157] found existing configuration files:
	
	I1001 20:34:37.031801   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1001 20:34:37.054028   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:34:37.054096   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:34:37.068277   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1001 20:34:37.099472   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:34:37.099558   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:34:37.109813   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1001 20:34:37.119548   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:34:37.119620   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:34:37.129522   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1001 20:34:37.138911   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:34:37.138971   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
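
Editor's note: the stale-config check above greps each kubeconfig under /etc/kubernetes for the expected endpoint https://control-plane.minikube.internal:8444 and removes any file that does not reference it (here the files are simply absent after the reset, so every grep exits with status 2). A hedged local equivalent of that pattern, using direct file access instead of minikube's SSH runner; the helper name is hypothetical.

    package cleanup

    import (
        "os"
        "strings"
    )

    // removeStaleKubeconfigs deletes any of the given kubeconfig files that do not
    // mention the expected control-plane endpoint. Missing files are skipped,
    // matching the "No such file or directory" cases in the log above.
    func removeStaleKubeconfigs(endpoint string, files []string) error {
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil {
                continue // file absent: nothing stale to remove
            }
            if !strings.Contains(string(data), endpoint) {
                if err := os.Remove(f); err != nil {
                    return err
                }
            }
        }
        return nil
    }
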
	I1001 20:34:37.149119   68418 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 20:34:37.193177   68418 kubeadm.go:310] W1001 20:34:37.161028    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 20:34:37.193935   68418 kubeadm.go:310] W1001 20:34:37.161888    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 20:34:37.305111   68418 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 20:34:45.582383   68418 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 20:34:45.582463   68418 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:34:45.582540   68418 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:34:45.582643   68418 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:34:45.582725   68418 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 20:34:45.582825   68418 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 20:34:45.584304   68418 out.go:235]   - Generating certificates and keys ...
	I1001 20:34:45.584409   68418 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:34:45.584488   68418 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:34:45.584584   68418 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 20:34:45.584666   68418 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 20:34:45.584757   68418 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 20:34:45.584833   68418 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 20:34:45.584926   68418 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 20:34:45.585014   68418 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 20:34:45.585109   68418 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 20:34:45.585227   68418 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 20:34:45.585291   68418 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 20:34:45.585364   68418 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:34:45.585438   68418 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:34:45.585527   68418 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 20:34:45.585609   68418 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:34:45.585710   68418 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:34:45.585792   68418 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:34:45.585901   68418 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:34:45.585990   68418 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:34:45.587360   68418 out.go:235]   - Booting up control plane ...
	I1001 20:34:45.587448   68418 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:34:45.587539   68418 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:34:45.587626   68418 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:34:45.587751   68418 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:34:45.587885   68418 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:34:45.587960   68418 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:34:45.588118   68418 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 20:34:45.588256   68418 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 20:34:45.588341   68418 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002411615s
	I1001 20:34:45.588453   68418 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1001 20:34:45.588531   68418 kubeadm.go:310] [api-check] The API server is healthy after 5.002438287s
	I1001 20:34:45.588653   68418 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 20:34:45.588821   68418 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 20:34:45.588925   68418 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 20:34:45.589184   68418 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-878552 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 20:34:45.589272   68418 kubeadm.go:310] [bootstrap-token] Using token: p1d60n.4sgx895mi22cjpsf
	I1001 20:34:45.590444   68418 out.go:235]   - Configuring RBAC rules ...
	I1001 20:34:45.590599   68418 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 20:34:45.590726   68418 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 20:34:45.590923   68418 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 20:34:45.591071   68418 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 20:34:45.591222   68418 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 20:34:45.591301   68418 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 20:34:45.591402   68418 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 20:34:45.591441   68418 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 20:34:45.591485   68418 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 20:34:45.591492   68418 kubeadm.go:310] 
	I1001 20:34:45.591540   68418 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 20:34:45.591548   68418 kubeadm.go:310] 
	I1001 20:34:45.591614   68418 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 20:34:45.591619   68418 kubeadm.go:310] 
	I1001 20:34:45.591644   68418 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 20:34:45.591694   68418 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 20:34:45.591750   68418 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 20:34:45.591756   68418 kubeadm.go:310] 
	I1001 20:34:45.591812   68418 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 20:34:45.591818   68418 kubeadm.go:310] 
	I1001 20:34:45.591857   68418 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 20:34:45.591865   68418 kubeadm.go:310] 
	I1001 20:34:45.591909   68418 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 20:34:45.591990   68418 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 20:34:45.592063   68418 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 20:34:45.592071   68418 kubeadm.go:310] 
	I1001 20:34:45.592195   68418 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 20:34:45.592313   68418 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 20:34:45.592322   68418 kubeadm.go:310] 
	I1001 20:34:45.592432   68418 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token p1d60n.4sgx895mi22cjpsf \
	I1001 20:34:45.592579   68418 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 \
	I1001 20:34:45.592611   68418 kubeadm.go:310] 	--control-plane 
	I1001 20:34:45.592620   68418 kubeadm.go:310] 
	I1001 20:34:45.592734   68418 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 20:34:45.592743   68418 kubeadm.go:310] 
	I1001 20:34:45.592858   68418 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token p1d60n.4sgx895mi22cjpsf \
	I1001 20:34:45.592982   68418 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 
	I1001 20:34:45.592997   68418 cni.go:84] Creating CNI manager for ""
	I1001 20:34:45.593009   68418 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:34:45.594419   68418 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 20:34:45.595548   68418 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 20:34:45.607351   68418 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
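
Editor's note: the 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration referenced by the "Configuring bridge CNI" step. Its exact contents are not printed in the log; the constant below is only a typical bridge/portmap conflist of that general shape (the subnet, version, and field values are assumptions, not the bytes minikube writes).

    package cni

    // exampleBridgeConflist is an illustrative bridge CNI config of the kind placed
    // at /etc/cni/net.d/1-k8s.conflist; all values here are assumptions.
    const exampleBridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }`
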
	I1001 20:34:45.627315   68418 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 20:34:45.627399   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:45.627424   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-878552 minikube.k8s.io/updated_at=2024_10_01T20_34_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=default-k8s-diff-port-878552 minikube.k8s.io/primary=true
	I1001 20:34:45.843925   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:45.843977   68418 ops.go:34] apiserver oom_adj: -16
	I1001 20:34:46.344009   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:46.844786   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:47.344138   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:47.844582   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:48.344478   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:48.844802   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:49.344790   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:49.844113   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:49.980078   68418 kubeadm.go:1113] duration metric: took 4.352743528s to wait for elevateKubeSystemPrivileges
	I1001 20:34:49.980127   68418 kubeadm.go:394] duration metric: took 4m59.934297539s to StartCluster
	I1001 20:34:49.980151   68418 settings.go:142] acquiring lock: {Name:mkeb3fe64ed992373f048aef24eb0d675bcab60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:34:49.980237   68418 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:34:49.982156   68418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/kubeconfig: {Name:mk7ef19d546928001abf2478a3abe1d17765a591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:34:49.982450   68418 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.4 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 20:34:49.982531   68418 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 20:34:49.982651   68418 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-878552"
	I1001 20:34:49.982674   68418 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-878552"
	I1001 20:34:49.982673   68418 config.go:182] Loaded profile config "default-k8s-diff-port-878552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W1001 20:34:49.982682   68418 addons.go:243] addon storage-provisioner should already be in state true
	I1001 20:34:49.982722   68418 host.go:66] Checking if "default-k8s-diff-port-878552" exists ...
	I1001 20:34:49.982727   68418 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-878552"
	I1001 20:34:49.982743   68418 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-878552"
	I1001 20:34:49.982817   68418 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-878552"
	I1001 20:34:49.982861   68418 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-878552"
	W1001 20:34:49.982871   68418 addons.go:243] addon metrics-server should already be in state true
	I1001 20:34:49.982899   68418 host.go:66] Checking if "default-k8s-diff-port-878552" exists ...
	I1001 20:34:49.983158   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:34:49.983157   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:34:49.983202   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:34:49.983222   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:34:49.983301   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:34:49.983360   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:34:49.983825   68418 out.go:177] * Verifying Kubernetes components...
	I1001 20:34:49.985618   68418 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:34:50.000925   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33641
	I1001 20:34:50.001031   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40311
	I1001 20:34:50.001469   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:34:50.001518   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:34:50.002031   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:34:50.002046   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:34:50.002084   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:34:50.002096   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:34:50.002510   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:34:50.002698   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:34:50.003148   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:34:50.003188   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:34:50.003432   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33083
	I1001 20:34:50.003813   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:34:50.003845   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:34:50.003858   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:34:50.004438   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:34:50.004462   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:34:50.004823   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:34:50.005017   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetState
	I1001 20:34:50.009397   68418 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-878552"
	W1001 20:34:50.009420   68418 addons.go:243] addon default-storageclass should already be in state true
	I1001 20:34:50.009449   68418 host.go:66] Checking if "default-k8s-diff-port-878552" exists ...
	I1001 20:34:50.009886   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:34:50.009937   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:34:50.025234   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42543
	I1001 20:34:50.025892   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:34:50.026556   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:34:50.026583   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:34:50.027217   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:34:50.027484   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetState
	I1001 20:34:50.029351   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42743
	I1001 20:34:50.029576   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:34:50.029996   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:34:50.030498   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:34:50.030520   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:34:50.030634   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45211
	I1001 20:34:50.030843   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:34:50.031078   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetState
	I1001 20:34:50.031171   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:34:50.031283   68418 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 20:34:50.031683   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:34:50.031706   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:34:50.032061   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:34:50.032524   68418 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:34:50.032542   68418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 20:34:50.032560   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:34:50.032650   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:34:50.032683   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:34:50.033489   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:34:50.034928   68418 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1001 20:34:50.036629   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:34:50.036714   68418 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1001 20:34:50.036728   68418 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1001 20:34:50.036757   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:34:50.037000   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:34:50.037020   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:34:50.037303   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:34:50.037502   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:34:50.037697   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:34:50.037858   68418 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa Username:docker}
	I1001 20:34:50.040023   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:34:50.040406   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:34:50.040428   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:34:50.040637   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:34:50.040843   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:34:50.041031   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:34:50.041156   68418 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa Username:docker}
	I1001 20:34:50.050069   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34785
	I1001 20:34:50.050601   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:34:50.051079   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:34:50.051098   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:34:50.051460   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:34:50.051601   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetState
	I1001 20:34:50.054072   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:34:50.054308   68418 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 20:34:50.054324   68418 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 20:34:50.054344   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:34:50.057697   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:34:50.058329   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:34:50.058386   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:34:50.058519   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:34:50.058781   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:34:50.059047   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:34:50.059192   68418 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa Username:docker}
	I1001 20:34:50.228332   68418 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:34:50.245991   68418 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-878552" to be "Ready" ...
	I1001 20:34:50.255784   68418 node_ready.go:49] node "default-k8s-diff-port-878552" has status "Ready":"True"
	I1001 20:34:50.255822   68418 node_ready.go:38] duration metric: took 9.789404ms for node "default-k8s-diff-port-878552" to be "Ready" ...
	I1001 20:34:50.255836   68418 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:34:50.262258   68418 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8xth8" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:50.409170   68418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 20:34:50.412846   68418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:34:50.423375   68418 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1001 20:34:50.423404   68418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1001 20:34:50.476160   68418 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1001 20:34:50.476192   68418 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1001 20:34:50.510810   68418 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 20:34:50.510840   68418 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1001 20:34:50.570025   68418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
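
Editor's note: addon installation here is two steps: scp the manifests onto the node, then run the bundled kubectl with KUBECONFIG=/var/lib/minikube/kubeconfig and one -f flag per manifest. A rough local equivalent of that apply step, shown with os/exec instead of minikube's ssh_runner; the function and parameter names are illustrative.

    package addons

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // applyManifests runs `kubectl apply -f <m1> -f <m2> ...` against the given
    // kubeconfig, roughly what the ssh_runner invocation above does on the node.
    func applyManifests(kubectlPath, kubeconfig string, manifests []string) error {
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        cmd := exec.Command(kubectlPath, args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
        }
        return nil
    }
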
	I1001 20:34:50.783367   68418 main.go:141] libmachine: Making call to close driver server
	I1001 20:34:50.783390   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Close
	I1001 20:34:50.783748   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | Closing plugin on server side
	I1001 20:34:50.783761   68418 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:34:50.783773   68418 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:34:50.783786   68418 main.go:141] libmachine: Making call to close driver server
	I1001 20:34:50.783794   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Close
	I1001 20:34:50.783980   68418 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:34:50.783993   68418 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:34:50.783999   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | Closing plugin on server side
	I1001 20:34:50.795782   68418 main.go:141] libmachine: Making call to close driver server
	I1001 20:34:50.795802   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Close
	I1001 20:34:50.796093   68418 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:34:50.796114   68418 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:34:51.424974   68418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.012087585s)
	I1001 20:34:51.425090   68418 main.go:141] libmachine: Making call to close driver server
	I1001 20:34:51.425107   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Close
	I1001 20:34:51.425376   68418 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:34:51.425413   68418 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:34:51.425426   68418 main.go:141] libmachine: Making call to close driver server
	I1001 20:34:51.425440   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Close
	I1001 20:34:51.425671   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | Closing plugin on server side
	I1001 20:34:51.425723   68418 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:34:51.425743   68418 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:34:51.713898   68418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.143834875s)
	I1001 20:34:51.713954   68418 main.go:141] libmachine: Making call to close driver server
	I1001 20:34:51.713969   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Close
	I1001 20:34:51.714336   68418 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:34:51.714375   68418 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:34:51.714380   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | Closing plugin on server side
	I1001 20:34:51.714385   68418 main.go:141] libmachine: Making call to close driver server
	I1001 20:34:51.714487   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Close
	I1001 20:34:51.714762   68418 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:34:51.714779   68418 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:34:51.714798   68418 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-878552"
	I1001 20:34:51.716414   68418 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1001 20:34:51.717866   68418 addons.go:510] duration metric: took 1.735348103s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1001 20:34:52.268955   68418 pod_ready.go:103] pod "coredns-7c65d6cfc9-8xth8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:34:54.769610   68418 pod_ready.go:93] pod "coredns-7c65d6cfc9-8xth8" in "kube-system" namespace has status "Ready":"True"
	I1001 20:34:54.769633   68418 pod_ready.go:82] duration metric: took 4.507339793s for pod "coredns-7c65d6cfc9-8xth8" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:54.769642   68418 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-p7wbg" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:56.775610   68418 pod_ready.go:103] pod "coredns-7c65d6cfc9-p7wbg" in "kube-system" namespace has status "Ready":"False"
	I1001 20:34:57.777422   68418 pod_ready.go:93] pod "coredns-7c65d6cfc9-p7wbg" in "kube-system" namespace has status "Ready":"True"
	I1001 20:34:57.777445   68418 pod_ready.go:82] duration metric: took 3.007796462s for pod "coredns-7c65d6cfc9-p7wbg" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.777455   68418 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.783103   68418 pod_ready.go:93] pod "etcd-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:34:57.783124   68418 pod_ready.go:82] duration metric: took 5.664052ms for pod "etcd-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.783135   68418 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.788028   68418 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:34:57.788052   68418 pod_ready.go:82] duration metric: took 4.910566ms for pod "kube-apiserver-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.788064   68418 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.792321   68418 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:34:57.792348   68418 pod_ready.go:82] duration metric: took 4.274793ms for pod "kube-controller-manager-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.792379   68418 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-272ln" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.797759   68418 pod_ready.go:93] pod "kube-proxy-272ln" in "kube-system" namespace has status "Ready":"True"
	I1001 20:34:57.797782   68418 pod_ready.go:82] duration metric: took 5.395909ms for pod "kube-proxy-272ln" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.797792   68418 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:58.173750   68418 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:34:58.173783   68418 pod_ready.go:82] duration metric: took 375.98387ms for pod "kube-scheduler-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:58.173793   68418 pod_ready.go:39] duration metric: took 7.917945016s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:34:58.173812   68418 api_server.go:52] waiting for apiserver process to appear ...
	I1001 20:34:58.173878   68418 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:34:58.188649   68418 api_server.go:72] duration metric: took 8.206165908s to wait for apiserver process to appear ...
	I1001 20:34:58.188676   68418 api_server.go:88] waiting for apiserver healthz status ...
	I1001 20:34:58.188697   68418 api_server.go:253] Checking apiserver healthz at https://192.168.50.4:8444/healthz ...
	I1001 20:34:58.193752   68418 api_server.go:279] https://192.168.50.4:8444/healthz returned 200:
	ok
	I1001 20:34:58.194629   68418 api_server.go:141] control plane version: v1.31.1
	I1001 20:34:58.194646   68418 api_server.go:131] duration metric: took 5.963942ms to wait for apiserver health ...
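
Editor's note: the healthz wait above is a GET against https://192.168.50.4:8444/healthz that counts as passing once the server answers 200 with body "ok". A minimal probe of the same kind; skipping TLS verification is purely to keep the sketch short, whereas a real client should trust the cluster CA.

    package health

    import (
        "crypto/tls"
        "io"
        "net/http"
        "strings"
        "time"
    )

    // apiServerHealthy returns true once GET url answers 200 with body "ok",
    // the condition logged as `returned 200: ok` above.
    func apiServerHealthy(url string) (bool, error) {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // TLS verification skipped only for brevity in this sketch.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return false, err
        }
        return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
    }
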
	I1001 20:34:58.194653   68418 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 20:34:58.378081   68418 system_pods.go:59] 9 kube-system pods found
	I1001 20:34:58.378110   68418 system_pods.go:61] "coredns-7c65d6cfc9-8xth8" [4a6d614d-f16c-46fb-add5-610ac5895e1c] Running
	I1001 20:34:58.378115   68418 system_pods.go:61] "coredns-7c65d6cfc9-p7wbg" [13fab587-7dc4-41fc-a74c-47372725886d] Running
	I1001 20:34:58.378121   68418 system_pods.go:61] "etcd-default-k8s-diff-port-878552" [56a25509-d233-470d-888a-cf87475bf51b] Running
	I1001 20:34:58.378124   68418 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-878552" [d74bbc5a-6944-4e7b-a175-59b8ce58b359] Running
	I1001 20:34:58.378128   68418 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-878552" [5f2b8294-3146-4996-8a92-69ae08803d55] Running
	I1001 20:34:58.378131   68418 system_pods.go:61] "kube-proxy-272ln" [9f2e367f-34c7-4117-bd8e-62b5aa58c7b5] Running
	I1001 20:34:58.378134   68418 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-878552" [91e886e5-8452-4fe2-8be8-7705eeed5073] Running
	I1001 20:34:58.378140   68418 system_pods.go:61] "metrics-server-6867b74b74-75m4s" [c8eb4eb3-ea7f-4a88-8c1f-e3331f44ae53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:34:58.378143   68418 system_pods.go:61] "storage-provisioner" [bfc9ed28-f04b-4e57-b8c0-f41849e1fc25] Running
	I1001 20:34:58.378151   68418 system_pods.go:74] duration metric: took 183.491966ms to wait for pod list to return data ...
	I1001 20:34:58.378157   68418 default_sa.go:34] waiting for default service account to be created ...
	I1001 20:34:58.574257   68418 default_sa.go:45] found service account: "default"
	I1001 20:34:58.574282   68418 default_sa.go:55] duration metric: took 196.119399ms for default service account to be created ...
	I1001 20:34:58.574290   68418 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 20:34:58.776341   68418 system_pods.go:86] 9 kube-system pods found
	I1001 20:34:58.776395   68418 system_pods.go:89] "coredns-7c65d6cfc9-8xth8" [4a6d614d-f16c-46fb-add5-610ac5895e1c] Running
	I1001 20:34:58.776406   68418 system_pods.go:89] "coredns-7c65d6cfc9-p7wbg" [13fab587-7dc4-41fc-a74c-47372725886d] Running
	I1001 20:34:58.776420   68418 system_pods.go:89] "etcd-default-k8s-diff-port-878552" [56a25509-d233-470d-888a-cf87475bf51b] Running
	I1001 20:34:58.776428   68418 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-878552" [d74bbc5a-6944-4e7b-a175-59b8ce58b359] Running
	I1001 20:34:58.776438   68418 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-878552" [5f2b8294-3146-4996-8a92-69ae08803d55] Running
	I1001 20:34:58.776443   68418 system_pods.go:89] "kube-proxy-272ln" [9f2e367f-34c7-4117-bd8e-62b5aa58c7b5] Running
	I1001 20:34:58.776449   68418 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-878552" [91e886e5-8452-4fe2-8be8-7705eeed5073] Running
	I1001 20:34:58.776456   68418 system_pods.go:89] "metrics-server-6867b74b74-75m4s" [c8eb4eb3-ea7f-4a88-8c1f-e3331f44ae53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:34:58.776463   68418 system_pods.go:89] "storage-provisioner" [bfc9ed28-f04b-4e57-b8c0-f41849e1fc25] Running
	I1001 20:34:58.776471   68418 system_pods.go:126] duration metric: took 202.174994ms to wait for k8s-apps to be running ...
	I1001 20:34:58.776481   68418 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 20:34:58.776526   68418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:34:58.791729   68418 system_svc.go:56] duration metric: took 15.241394ms WaitForService to wait for kubelet
	I1001 20:34:58.791758   68418 kubeadm.go:582] duration metric: took 8.809278003s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:34:58.791774   68418 node_conditions.go:102] verifying NodePressure condition ...
	I1001 20:34:58.976076   68418 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 20:34:58.976102   68418 node_conditions.go:123] node cpu capacity is 2
	I1001 20:34:58.976115   68418 node_conditions.go:105] duration metric: took 184.336121ms to run NodePressure ...
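
Editor's note: the NodePressure step reads capacity straight from the Node object (here 17734596Ki of ephemeral storage and 2 CPUs). An illustrative read of the same fields with client-go, assuming an already-constructed clientset as in the earlier readiness sketch.

    package nodes

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity lists each node's ephemeral-storage and CPU capacity,
    // the same figures reported by the node_conditions lines above.
    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
        return nil
    }
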
	I1001 20:34:58.976127   68418 start.go:241] waiting for startup goroutines ...
	I1001 20:34:58.976136   68418 start.go:246] waiting for cluster config update ...
	I1001 20:34:58.976149   68418 start.go:255] writing updated cluster config ...
	I1001 20:34:58.976450   68418 ssh_runner.go:195] Run: rm -f paused
	I1001 20:34:59.026367   68418 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 20:34:59.029055   68418 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-878552" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 01 20:35:48 no-preload-262337 crio[710]: time="2024-10-01 20:35:48.228601639Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814948228574846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74f69363-b052-4d40-aba6-2ed1fd9ea7b0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:35:48 no-preload-262337 crio[710]: time="2024-10-01 20:35:48.229346029Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=42d09de0-05a9-4776-a44d-6600e064977f name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:35:48 no-preload-262337 crio[710]: time="2024-10-01 20:35:48.229412988Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=42d09de0-05a9-4776-a44d-6600e064977f name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:35:48 no-preload-262337 crio[710]: time="2024-10-01 20:35:48.229613544Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa,PodSandboxId:0a88665266cc2f08e215002897cb59c7a8c13f13b19399b21fde199aa9cd896a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727814170931281786,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8832193a-39b4-49b9-b943-3241bb27fb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3981814eef46226993c1c4a4edb27e11c712d927d02d3108947611a0d4d6b389,PodSandboxId:5ffc250487ecf1179d6a16e31379ac9ab453100b694e252acc70d6597f920522,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727814151245737591,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 815f5080-dfac-4639-8d4d-799975d8f0e1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6,PodSandboxId:6cd49ba952eafac891af87b63cb3223c25ee3f217375a043fafdae31bdbadb89,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814147774153625,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g8jf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fbddef1-a564-4ee8-ab53-ae838d0fd984,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b,PodSandboxId:0a88665266cc2f08e215002897cb59c7a8c13f13b19399b21fde199aa9cd896a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727814140178687240,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
832193a-39b4-49b9-b943-3241bb27fb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8,PodSandboxId:dd90e7d68df5dfcc902fa373cc2aab0991248560d4a60f8989f9c31aee11c584,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727814140143266253,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e25a055c-0203-4fe7-8801-560b9cdb27
bb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f,PodSandboxId:4cb8edf5989f6ac213d0b048567669885d32706e746baf9d96f03201eea66a51,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727814135422490953,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9115d965fc4901e54c07a2cea5b4685d,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd,PodSandboxId:2670bc708b1763bcb44d536dc79e950560b46d7eed02b99a37f5b6fd7e6a6bc8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727814135372060114,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fecf9806c385262cee9f746f5ec0ae30,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d,PodSandboxId:42f37858d2731d544150520dfed9e3863f252ec6c1fc1fff71b0f33fe708d94a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727814135363000750,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9039c1881a40941c5423b90636b917f0,},Annotations:map[string]string{io.kubernetes.container.hash: 12fa
acf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf,PodSandboxId:f808057f488899b902bf93f2185fcb395e721ac7fe24899f95011ae1d77f8b68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727814135341544363,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e600911258e76c20c9684f3a9522644b,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=42d09de0-05a9-4776-a44d-6600e064977f name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:35:48 no-preload-262337 crio[710]: time="2024-10-01 20:35:48.265978544Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=337cece5-38c6-4d61-9a7d-f36cb844d331 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:35:48 no-preload-262337 crio[710]: time="2024-10-01 20:35:48.266056156Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=337cece5-38c6-4d61-9a7d-f36cb844d331 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:35:48 no-preload-262337 crio[710]: time="2024-10-01 20:35:48.267254005Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a5e2ba17-4bc7-4751-88c4-b380e8ee60cb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:35:48 no-preload-262337 crio[710]: time="2024-10-01 20:35:48.267611610Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814948267590064,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5e2ba17-4bc7-4751-88c4-b380e8ee60cb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:35:48 no-preload-262337 crio[710]: time="2024-10-01 20:35:48.268063475Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7572db50-e03c-4dff-bfd0-4dfabb861b8a name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:35:48 no-preload-262337 crio[710]: time="2024-10-01 20:35:48.268161630Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7572db50-e03c-4dff-bfd0-4dfabb861b8a name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:35:48 no-preload-262337 crio[710]: time="2024-10-01 20:35:48.268347082Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa,PodSandboxId:0a88665266cc2f08e215002897cb59c7a8c13f13b19399b21fde199aa9cd896a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727814170931281786,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8832193a-39b4-49b9-b943-3241bb27fb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3981814eef46226993c1c4a4edb27e11c712d927d02d3108947611a0d4d6b389,PodSandboxId:5ffc250487ecf1179d6a16e31379ac9ab453100b694e252acc70d6597f920522,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727814151245737591,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 815f5080-dfac-4639-8d4d-799975d8f0e1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6,PodSandboxId:6cd49ba952eafac891af87b63cb3223c25ee3f217375a043fafdae31bdbadb89,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814147774153625,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g8jf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fbddef1-a564-4ee8-ab53-ae838d0fd984,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b,PodSandboxId:0a88665266cc2f08e215002897cb59c7a8c13f13b19399b21fde199aa9cd896a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727814140178687240,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
832193a-39b4-49b9-b943-3241bb27fb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8,PodSandboxId:dd90e7d68df5dfcc902fa373cc2aab0991248560d4a60f8989f9c31aee11c584,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727814140143266253,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e25a055c-0203-4fe7-8801-560b9cdb27
bb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f,PodSandboxId:4cb8edf5989f6ac213d0b048567669885d32706e746baf9d96f03201eea66a51,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727814135422490953,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9115d965fc4901e54c07a2cea5b4685d,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd,PodSandboxId:2670bc708b1763bcb44d536dc79e950560b46d7eed02b99a37f5b6fd7e6a6bc8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727814135372060114,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fecf9806c385262cee9f746f5ec0ae30,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d,PodSandboxId:42f37858d2731d544150520dfed9e3863f252ec6c1fc1fff71b0f33fe708d94a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727814135363000750,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9039c1881a40941c5423b90636b917f0,},Annotations:map[string]string{io.kubernetes.container.hash: 12fa
acf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf,PodSandboxId:f808057f488899b902bf93f2185fcb395e721ac7fe24899f95011ae1d77f8b68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727814135341544363,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e600911258e76c20c9684f3a9522644b,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7572db50-e03c-4dff-bfd0-4dfabb861b8a name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:35:48 no-preload-262337 crio[710]: time="2024-10-01 20:35:48.313976643Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=68b3960a-cb94-4c1f-abfd-001e4ab33813 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:35:48 no-preload-262337 crio[710]: time="2024-10-01 20:35:48.314052608Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=68b3960a-cb94-4c1f-abfd-001e4ab33813 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:35:48 no-preload-262337 crio[710]: time="2024-10-01 20:35:48.315071142Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4b2ae21d-2019-437b-aa41-117283dd6b3c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:35:48 no-preload-262337 crio[710]: time="2024-10-01 20:35:48.315754060Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814948315719769,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4b2ae21d-2019-437b-aa41-117283dd6b3c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:35:48 no-preload-262337 crio[710]: time="2024-10-01 20:35:48.316425973Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b1964eee-1777-422d-883a-4825acfb4d43 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:35:48 no-preload-262337 crio[710]: time="2024-10-01 20:35:48.316480900Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b1964eee-1777-422d-883a-4825acfb4d43 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:35:48 no-preload-262337 crio[710]: time="2024-10-01 20:35:48.316664067Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa,PodSandboxId:0a88665266cc2f08e215002897cb59c7a8c13f13b19399b21fde199aa9cd896a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727814170931281786,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8832193a-39b4-49b9-b943-3241bb27fb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3981814eef46226993c1c4a4edb27e11c712d927d02d3108947611a0d4d6b389,PodSandboxId:5ffc250487ecf1179d6a16e31379ac9ab453100b694e252acc70d6597f920522,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727814151245737591,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 815f5080-dfac-4639-8d4d-799975d8f0e1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6,PodSandboxId:6cd49ba952eafac891af87b63cb3223c25ee3f217375a043fafdae31bdbadb89,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814147774153625,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g8jf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fbddef1-a564-4ee8-ab53-ae838d0fd984,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b,PodSandboxId:0a88665266cc2f08e215002897cb59c7a8c13f13b19399b21fde199aa9cd896a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727814140178687240,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
832193a-39b4-49b9-b943-3241bb27fb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8,PodSandboxId:dd90e7d68df5dfcc902fa373cc2aab0991248560d4a60f8989f9c31aee11c584,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727814140143266253,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e25a055c-0203-4fe7-8801-560b9cdb27
bb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f,PodSandboxId:4cb8edf5989f6ac213d0b048567669885d32706e746baf9d96f03201eea66a51,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727814135422490953,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9115d965fc4901e54c07a2cea5b4685d,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd,PodSandboxId:2670bc708b1763bcb44d536dc79e950560b46d7eed02b99a37f5b6fd7e6a6bc8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727814135372060114,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fecf9806c385262cee9f746f5ec0ae30,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d,PodSandboxId:42f37858d2731d544150520dfed9e3863f252ec6c1fc1fff71b0f33fe708d94a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727814135363000750,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9039c1881a40941c5423b90636b917f0,},Annotations:map[string]string{io.kubernetes.container.hash: 12fa
acf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf,PodSandboxId:f808057f488899b902bf93f2185fcb395e721ac7fe24899f95011ae1d77f8b68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727814135341544363,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e600911258e76c20c9684f3a9522644b,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b1964eee-1777-422d-883a-4825acfb4d43 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:35:48 no-preload-262337 crio[710]: time="2024-10-01 20:35:48.355206105Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=94757080-3bf4-47b4-bfc1-e1e7e285fc08 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:35:48 no-preload-262337 crio[710]: time="2024-10-01 20:35:48.355293386Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=94757080-3bf4-47b4-bfc1-e1e7e285fc08 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:35:48 no-preload-262337 crio[710]: time="2024-10-01 20:35:48.356723795Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6b7f3546-965c-4eb7-a1d8-c752fe10921a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:35:48 no-preload-262337 crio[710]: time="2024-10-01 20:35:48.357267791Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814948357235677,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b7f3546-965c-4eb7-a1d8-c752fe10921a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:35:48 no-preload-262337 crio[710]: time="2024-10-01 20:35:48.357944708Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=84ffa8ed-6897-4554-9792-302a956721ed name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:35:48 no-preload-262337 crio[710]: time="2024-10-01 20:35:48.358000243Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=84ffa8ed-6897-4554-9792-302a956721ed name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:35:48 no-preload-262337 crio[710]: time="2024-10-01 20:35:48.358277862Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa,PodSandboxId:0a88665266cc2f08e215002897cb59c7a8c13f13b19399b21fde199aa9cd896a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727814170931281786,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8832193a-39b4-49b9-b943-3241bb27fb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3981814eef46226993c1c4a4edb27e11c712d927d02d3108947611a0d4d6b389,PodSandboxId:5ffc250487ecf1179d6a16e31379ac9ab453100b694e252acc70d6597f920522,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727814151245737591,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 815f5080-dfac-4639-8d4d-799975d8f0e1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6,PodSandboxId:6cd49ba952eafac891af87b63cb3223c25ee3f217375a043fafdae31bdbadb89,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814147774153625,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g8jf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fbddef1-a564-4ee8-ab53-ae838d0fd984,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b,PodSandboxId:0a88665266cc2f08e215002897cb59c7a8c13f13b19399b21fde199aa9cd896a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727814140178687240,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
832193a-39b4-49b9-b943-3241bb27fb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8,PodSandboxId:dd90e7d68df5dfcc902fa373cc2aab0991248560d4a60f8989f9c31aee11c584,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727814140143266253,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e25a055c-0203-4fe7-8801-560b9cdb27
bb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f,PodSandboxId:4cb8edf5989f6ac213d0b048567669885d32706e746baf9d96f03201eea66a51,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727814135422490953,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9115d965fc4901e54c07a2cea5b4685d,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd,PodSandboxId:2670bc708b1763bcb44d536dc79e950560b46d7eed02b99a37f5b6fd7e6a6bc8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727814135372060114,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fecf9806c385262cee9f746f5ec0ae30,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d,PodSandboxId:42f37858d2731d544150520dfed9e3863f252ec6c1fc1fff71b0f33fe708d94a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727814135363000750,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9039c1881a40941c5423b90636b917f0,},Annotations:map[string]string{io.kubernetes.container.hash: 12fa
acf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf,PodSandboxId:f808057f488899b902bf93f2185fcb395e721ac7fe24899f95011ae1d77f8b68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727814135341544363,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e600911258e76c20c9684f3a9522644b,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=84ffa8ed-6897-4554-9792-302a956721ed name=/runtime.v1.RuntimeService/ListContainers
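
The Version, ImageFsInfo, and ListContainers entries above are CRI-O answering the kubelet's routine CRI polls (all at level=debug), not errors. A rough manual equivalent on the node, assuming crictl is available and pointed at the CRI-O socket recorded in the node annotations below (unix:///var/run/crio/crio.sock), would be:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a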
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a5ae72bcebfe4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   0a88665266cc2       storage-provisioner
	3981814eef462       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   5ffc250487ecf       busybox
	4380c36f31b67       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   6cd49ba952eaf       coredns-7c65d6cfc9-g8jf8
	652cab583d763       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   0a88665266cc2       storage-provisioner
	fc3552d19417a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago      Running             kube-proxy                1                   dd90e7d68df5d       kube-proxy-7rrkn
	586d6feee0436       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   4cb8edf5989f6       etcd-no-preload-262337
	a64415a2dee8b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      13 minutes ago      Running             kube-apiserver            1                   2670bc708b176       kube-apiserver-no-preload-262337
	89f0e3dd97e8a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago      Running             kube-scheduler            1                   42f37858d2731       kube-scheduler-no-preload-262337
	69adf90addf5f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      13 minutes ago      Running             kube-controller-manager   1                   f808057f48889       kube-controller-manager-no-preload-262337
	
	
	==> coredns [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44278 - 21795 "HINFO IN 2150363184310238732.2692046288970790068. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013592862s
	
	
	==> describe nodes <==
	Name:               no-preload-262337
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-262337
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=no-preload-262337
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T20_12_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 20:12:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-262337
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 20:35:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 20:33:02 +0000   Tue, 01 Oct 2024 20:12:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 20:33:02 +0000   Tue, 01 Oct 2024 20:12:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 20:33:02 +0000   Tue, 01 Oct 2024 20:12:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 20:33:02 +0000   Tue, 01 Oct 2024 20:22:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.93
	  Hostname:    no-preload-262337
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2b245fbd1b7b4233923322e30b8c6875
	  System UUID:                2b245fbd-1b7b-4233-9233-22e30b8c6875
	  Boot ID:                    1550a445-c7c9-4305-8e82-dff1255f4b52
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7c65d6cfc9-g8jf8                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-no-preload-262337                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-no-preload-262337             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-no-preload-262337    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-7rrkn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-no-preload-262337             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-6867b74b74-2rpwt              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node no-preload-262337 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node no-preload-262337 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node no-preload-262337 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node no-preload-262337 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node no-preload-262337 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node no-preload-262337 status is now: NodeHasSufficientPID
	  Normal  NodeReady                22m                kubelet          Node no-preload-262337 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node no-preload-262337 event: Registered Node no-preload-262337 in Controller
	  Normal  CIDRAssignmentFailed     22m                cidrAllocator    Node no-preload-262337 status is now: CIDRAssignmentFailed
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-262337 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-262337 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-262337 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-262337 event: Registered Node no-preload-262337 in Controller
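
The pod list above includes metrics-server-6867b74b74-2rpwt in kube-system, and the kube-apiserver log further down reports a 503 while fetching the v1beta1.metrics.k8s.io OpenAPI spec, which typically means the APIService's backing metrics-server was not ready or reachable at that moment. A hedged way to cross-check from the same kubeconfig (the context name is assumed to match the profile, and k8s-app=metrics-server is the label the upstream manifests normally use):

	kubectl --context no-preload-262337 get apiservice v1beta1.metrics.k8s.io
	kubectl --context no-preload-262337 -n kube-system get pods -l k8s-app=metrics-server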
	
	
	==> dmesg <==
	[Oct 1 20:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054207] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040966] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.074423] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.998053] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.538389] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.802449] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.059617] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065206] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.169845] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.122284] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.287549] systemd-fstab-generator[702]: Ignoring "noauto" option for root device
	[Oct 1 20:22] systemd-fstab-generator[1241]: Ignoring "noauto" option for root device
	[  +0.059109] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.018009] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +3.333709] kauditd_printk_skb: 97 callbacks suppressed
	[  +4.257438] systemd-fstab-generator[1982]: Ignoring "noauto" option for root device
	[  +3.711729] kauditd_printk_skb: 61 callbacks suppressed
	[  +5.521799] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f] <==
	{"level":"info","ts":"2024-10-01T20:22:15.933732Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T20:22:15.943033Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-01T20:22:15.955512Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"4e6e2c9029caadaa","initial-advertise-peer-urls":["https://192.168.61.93:2380"],"listen-peer-urls":["https://192.168.61.93:2380"],"advertise-client-urls":["https://192.168.61.93:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.93:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-01T20:22:15.955562Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-01T20:22:15.955193Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.93:2380"}
	{"level":"info","ts":"2024-10-01T20:22:15.955592Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.93:2380"}
	{"level":"info","ts":"2024-10-01T20:22:17.465605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6e2c9029caadaa is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-01T20:22:17.465707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6e2c9029caadaa became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-01T20:22:17.465753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6e2c9029caadaa received MsgPreVoteResp from 4e6e2c9029caadaa at term 2"}
	{"level":"info","ts":"2024-10-01T20:22:17.465782Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6e2c9029caadaa became candidate at term 3"}
	{"level":"info","ts":"2024-10-01T20:22:17.465807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6e2c9029caadaa received MsgVoteResp from 4e6e2c9029caadaa at term 3"}
	{"level":"info","ts":"2024-10-01T20:22:17.465841Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6e2c9029caadaa became leader at term 3"}
	{"level":"info","ts":"2024-10-01T20:22:17.465866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4e6e2c9029caadaa elected leader 4e6e2c9029caadaa at term 3"}
	{"level":"info","ts":"2024-10-01T20:22:17.467383Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"4e6e2c9029caadaa","local-member-attributes":"{Name:no-preload-262337 ClientURLs:[https://192.168.61.93:2379]}","request-path":"/0/members/4e6e2c9029caadaa/attributes","cluster-id":"4a4285095021b5a3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-01T20:22:17.467443Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T20:22:17.467623Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T20:22:17.468005Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-01T20:22:17.468034Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-01T20:22:17.468737Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T20:22:17.469472Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T20:22:17.469549Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.93:2379"}
	{"level":"info","ts":"2024-10-01T20:22:17.470337Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-01T20:32:17.499820Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":865}
	{"level":"info","ts":"2024-10-01T20:32:17.509482Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":865,"took":"9.279816ms","hash":505999604,"current-db-size-bytes":2740224,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2740224,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-10-01T20:32:17.509540Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":505999604,"revision":865,"compact-revision":-1}
	
	
	==> kernel <==
	 20:35:48 up 14 min,  0 users,  load average: 0.25, 0.21, 0.18
	Linux no-preload-262337 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1001 20:32:19.766745       1 handler_proxy.go:99] no RequestInfo found in the context
	E1001 20:32:19.767038       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1001 20:32:19.768191       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1001 20:32:19.768266       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1001 20:33:19.768916       1 handler_proxy.go:99] no RequestInfo found in the context
	E1001 20:33:19.769167       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1001 20:33:19.769234       1 handler_proxy.go:99] no RequestInfo found in the context
	E1001 20:33:19.769259       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1001 20:33:19.770399       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1001 20:33:19.770442       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1001 20:35:19.770633       1 handler_proxy.go:99] no RequestInfo found in the context
	W1001 20:35:19.770633       1 handler_proxy.go:99] no RequestInfo found in the context
	E1001 20:35:19.771212       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1001 20:35:19.771214       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1001 20:35:19.772376       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1001 20:35:19.772438       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf] <==
	E1001 20:30:22.354937       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:30:22.944313       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:30:52.362027       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:30:52.957656       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:31:22.369831       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:31:22.965587       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:31:52.375251       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:31:52.973564       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:32:22.382458       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:32:22.981009       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:32:52.388891       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:32:52.994640       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1001 20:33:02.143470       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-262337"
	E1001 20:33:22.395895       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:33:23.004261       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1001 20:33:32.759261       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="377.861µs"
	I1001 20:33:46.755311       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="114.495µs"
	E1001 20:33:52.401355       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:33:53.011849       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:34:22.408245       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:34:23.019174       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:34:52.414935       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:34:53.041833       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:35:22.421829       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:35:23.052017       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 20:22:20.521360       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 20:22:20.550791       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.93"]
	E1001 20:22:20.550991       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 20:22:20.619277       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 20:22:20.619371       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 20:22:20.619410       1 server_linux.go:169] "Using iptables Proxier"
	I1001 20:22:20.622114       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 20:22:20.623219       1 server.go:483] "Version info" version="v1.31.1"
	I1001 20:22:20.623289       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 20:22:20.626084       1 config.go:199] "Starting service config controller"
	I1001 20:22:20.626763       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 20:22:20.627025       1 config.go:105] "Starting endpoint slice config controller"
	I1001 20:22:20.627071       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 20:22:20.628413       1 config.go:328] "Starting node config controller"
	I1001 20:22:20.628443       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 20:22:20.728188       1 shared_informer.go:320] Caches are synced for service config
	I1001 20:22:20.728312       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 20:22:20.728549       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d] <==
	I1001 20:22:16.509384       1 serving.go:386] Generated self-signed cert in-memory
	W1001 20:22:18.734552       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1001 20:22:18.734645       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1001 20:22:18.734674       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1001 20:22:18.734698       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1001 20:22:18.782848       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1001 20:22:18.783224       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 20:22:18.792980       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 20:22:18.793078       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1001 20:22:18.793099       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1001 20:22:18.793248       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1001 20:22:18.894551       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 01 20:34:35 no-preload-262337 kubelet[1371]: E1001 20:34:35.742234    1371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2rpwt" podUID="235515ab-28fc-437b-983a-243f7a8fb183"
	Oct 01 20:34:44 no-preload-262337 kubelet[1371]: E1001 20:34:44.959260    1371 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814884958831087,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:34:44 no-preload-262337 kubelet[1371]: E1001 20:34:44.959308    1371 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814884958831087,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:34:49 no-preload-262337 kubelet[1371]: E1001 20:34:49.741629    1371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2rpwt" podUID="235515ab-28fc-437b-983a-243f7a8fb183"
	Oct 01 20:34:54 no-preload-262337 kubelet[1371]: E1001 20:34:54.961630    1371 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814894961257763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:34:54 no-preload-262337 kubelet[1371]: E1001 20:34:54.961947    1371 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814894961257763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:35:04 no-preload-262337 kubelet[1371]: E1001 20:35:04.743742    1371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2rpwt" podUID="235515ab-28fc-437b-983a-243f7a8fb183"
	Oct 01 20:35:04 no-preload-262337 kubelet[1371]: E1001 20:35:04.964326    1371 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814904963739567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:35:04 no-preload-262337 kubelet[1371]: E1001 20:35:04.964436    1371 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814904963739567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:35:14 no-preload-262337 kubelet[1371]: E1001 20:35:14.760590    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 01 20:35:14 no-preload-262337 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 01 20:35:14 no-preload-262337 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 01 20:35:14 no-preload-262337 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 20:35:14 no-preload-262337 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 20:35:14 no-preload-262337 kubelet[1371]: E1001 20:35:14.966234    1371 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814914965659732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:35:14 no-preload-262337 kubelet[1371]: E1001 20:35:14.966301    1371 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814914965659732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:35:17 no-preload-262337 kubelet[1371]: E1001 20:35:17.742100    1371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2rpwt" podUID="235515ab-28fc-437b-983a-243f7a8fb183"
	Oct 01 20:35:24 no-preload-262337 kubelet[1371]: E1001 20:35:24.968037    1371 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814924967700258,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:35:24 no-preload-262337 kubelet[1371]: E1001 20:35:24.968068    1371 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814924967700258,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:35:31 no-preload-262337 kubelet[1371]: E1001 20:35:31.741927    1371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2rpwt" podUID="235515ab-28fc-437b-983a-243f7a8fb183"
	Oct 01 20:35:34 no-preload-262337 kubelet[1371]: E1001 20:35:34.969846    1371 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814934969491713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:35:34 no-preload-262337 kubelet[1371]: E1001 20:35:34.970278    1371 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814934969491713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:35:44 no-preload-262337 kubelet[1371]: E1001 20:35:44.972675    1371 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814944972324577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:35:44 no-preload-262337 kubelet[1371]: E1001 20:35:44.973017    1371 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727814944972324577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:35:45 no-preload-262337 kubelet[1371]: E1001 20:35:45.742105    1371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2rpwt" podUID="235515ab-28fc-437b-983a-243f7a8fb183"
	
	
	==> storage-provisioner [652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b] <==
	I1001 20:22:20.317219       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1001 20:22:50.319942       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa] <==
	I1001 20:22:51.007033       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1001 20:22:51.019717       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1001 20:22:51.019797       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1001 20:23:08.420447       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1001 20:23:08.420680       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-262337_51785853-c7f5-43d5-a7af-4fd5eb81ccb8!
	I1001 20:23:08.422063       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"836cd95c-e80f-446d-a21e-bcc0177b8324", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-262337_51785853-c7f5-43d5-a7af-4fd5eb81ccb8 became leader
	I1001 20:23:08.521800       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-262337_51785853-c7f5-43d5-a7af-4fd5eb81ccb8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-262337 -n no-preload-262337
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-262337 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-2rpwt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-262337 describe pod metrics-server-6867b74b74-2rpwt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-262337 describe pod metrics-server-6867b74b74-2rpwt: exit status 1 (61.205782ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-2rpwt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-262337 describe pod metrics-server-6867b74b74-2rpwt: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.50s)
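
For reference, the post-mortem's non-running-pod check above (helpers_test.go:261, `kubectl get po -A --field-selector=status.phase!=Running`) can be reproduced outside the test harness with client-go. This is a minimal sketch only, not the harness's actual implementation; kubeconfigPath is an assumed placeholder, not a value taken from this report.

	// list-nonrunning.go: list pods whose phase is not Running, mirroring the
	// field selector used by the post-mortem step above.
	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: adjust to the kubeconfig of the profile under test.
		kubeconfigPath := "/path/to/kubeconfig"
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// An empty namespace ("") lists pods across all namespaces, like `kubectl get po -A`.
		pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s\n", p.Namespace, p.Name)
		}
	}

Run against the same context, this would have printed kube-system/metrics-server-6867b74b74-2rpwt at the time the non-running-pod list above was captured.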

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.81s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
E1001 20:31:34.840045   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
E1001 20:31:59.024809   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
E1001 20:36:34.840019   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:36:59.024549   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-359369 -n old-k8s-version-359369
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-359369 -n old-k8s-version-359369: exit status 2 (231.174628ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-359369" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
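For context on the warnings above: the wait is essentially a poll of the apiserver for pods carrying the k8s-app=kubernetes-dashboard label, and every iteration fails with connection refused because the apiserver at 192.168.72.110:8443 never comes back after the stop/start. Below is a minimal sketch of such a poll with client-go; the kubeconfig path, poll interval, and error handling are illustrative assumptions, not the helpers_test.go implementation.

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Overall budget matches the 9m0s wait reported by the test.
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err != nil {
			// While the apiserver is down, each attempt surfaces the
			// "connection refused" error seen in the log above.
			fmt.Println("WARNING: pod list returned:", err)
		} else if len(pods.Items) > 0 {
			fmt.Printf("found %d dashboard pod(s)\n", len(pods.Items))
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("giving up:", ctx.Err()) // context deadline exceeded
			return
		case <-time.After(3 * time.Second): // poll interval is an assumption
		}
	}
}
```

Because the List call carries the deadline-bound context, the final attempt can fail inside the client's rate limiter rather than on the wire, which is consistent with the last warning ("client rate limiter Wait returned an error: context deadline exceeded") before the failure is declared.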
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-359369 -n old-k8s-version-359369
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-359369 -n old-k8s-version-359369: exit status 2 (224.02566ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
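The two status probes above report the host as Running but the apiserver as Stopped, which is why the harness skips kubectl commands and proceeds straight to log collection. A minimal sketch of reproducing that check by shelling out to the same minikube invocation seen in the log (profile name taken from the output above; treating a non-zero exit as informational rather than fatal is an assumption mirroring the "may be ok" note):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as in the log; a non-zero exit can still carry a
	// valid state string ("Stopped"), so the output is printed either way.
	for _, field := range []string{"Host", "APIServer"} {
		out, err := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{."+field+"}}", "-p", "old-k8s-version-359369",
			"-n", "old-k8s-version-359369").CombinedOutput()
		fmt.Printf("%s=%s (err=%v)\n", field, strings.TrimSpace(string(out)), err)
	}
}
```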
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-359369 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-402897                              | cert-expiration-402897       | jenkins | v1.34.0 | 01 Oct 24 20:12 UTC | 01 Oct 24 20:12 UTC |
	| start   | -p embed-certs-106982                                  | embed-certs-106982           | jenkins | v1.34.0 | 01 Oct 24 20:12 UTC | 01 Oct 24 20:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-262337             | no-preload-262337            | jenkins | v1.34.0 | 01 Oct 24 20:13 UTC | 01 Oct 24 20:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-262337                                   | no-preload-262337            | jenkins | v1.34.0 | 01 Oct 24 20:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-106982            | embed-certs-106982           | jenkins | v1.34.0 | 01 Oct 24 20:13 UTC | 01 Oct 24 20:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-106982                                  | embed-certs-106982           | jenkins | v1.34.0 | 01 Oct 24 20:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396    | jenkins | v1.34.0 | 01 Oct 24 20:14 UTC | 01 Oct 24 20:14 UTC |
	| start   | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396    | jenkins | v1.34.0 | 01 Oct 24 20:14 UTC | 01 Oct 24 20:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396    | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396    | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC | 01 Oct 24 20:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-359369        | old-k8s-version-359369       | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-262337                  | no-preload-262337            | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-262337                                   | no-preload-262337            | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC | 01 Oct 24 20:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-106982                 | embed-certs-106982           | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396    | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC | 01 Oct 24 20:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-556200 | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC | 01 Oct 24 20:16 UTC |
	|         | disable-driver-mounts-556200                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC | 01 Oct 24 20:21 UTC |
	|         | default-k8s-diff-port-878552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-106982                                  | embed-certs-106982           | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC | 01 Oct 24 20:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-359369                              | old-k8s-version-359369       | jenkins | v1.34.0 | 01 Oct 24 20:17 UTC | 01 Oct 24 20:17 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-359369             | old-k8s-version-359369       | jenkins | v1.34.0 | 01 Oct 24 20:17 UTC | 01 Oct 24 20:17 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-359369                              | old-k8s-version-359369       | jenkins | v1.34.0 | 01 Oct 24 20:17 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-878552  | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:22 UTC | 01 Oct 24 20:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:22 UTC |                     |
	|         | default-k8s-diff-port-878552                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-878552       | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:24 UTC | 01 Oct 24 20:34 UTC |
	|         | default-k8s-diff-port-878552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 20:24:40
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 20:24:40.832961   68418 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:24:40.833061   68418 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:24:40.833066   68418 out.go:358] Setting ErrFile to fd 2...
	I1001 20:24:40.833070   68418 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:24:40.833265   68418 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 20:24:40.833818   68418 out.go:352] Setting JSON to false
	I1001 20:24:40.834796   68418 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7623,"bootTime":1727806658,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 20:24:40.834894   68418 start.go:139] virtualization: kvm guest
	I1001 20:24:40.837148   68418 out.go:177] * [default-k8s-diff-port-878552] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 20:24:40.838511   68418 notify.go:220] Checking for updates...
	I1001 20:24:40.838551   68418 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 20:24:40.839938   68418 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 20:24:40.841161   68418 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:24:40.842268   68418 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 20:24:40.843373   68418 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 20:24:40.844538   68418 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 20:24:40.846141   68418 config.go:182] Loaded profile config "default-k8s-diff-port-878552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:24:40.846513   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:24:40.846561   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:24:40.862168   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42661
	I1001 20:24:40.862628   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:24:40.863294   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:24:40.863326   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:24:40.863699   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:24:40.863903   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:24:40.864180   68418 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 20:24:40.864548   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:24:40.864620   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:24:40.880173   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45711
	I1001 20:24:40.880719   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:24:40.881220   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:24:40.881245   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:24:40.881581   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:24:40.881795   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:24:40.920802   68418 out.go:177] * Using the kvm2 driver based on existing profile
	I1001 20:24:40.921986   68418 start.go:297] selected driver: kvm2
	I1001 20:24:40.921999   68418 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-878552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-878552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.4 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks
:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:24:40.922122   68418 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 20:24:40.922802   68418 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:24:40.922895   68418 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-11198/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 20:24:40.938386   68418 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 20:24:40.938811   68418 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:24:40.938841   68418 cni.go:84] Creating CNI manager for ""
	I1001 20:24:40.938880   68418 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:24:40.938931   68418 start.go:340] cluster config:
	{Name:default-k8s-diff-port-878552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-878552 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.4 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-h
ost Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:24:40.939036   68418 iso.go:125] acquiring lock: {Name:mk06aa0d5182c9fbfa5e40313025cbec2b4400a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:24:40.940656   68418 out.go:177] * Starting "default-k8s-diff-port-878552" primary control-plane node in "default-k8s-diff-port-878552" cluster
	I1001 20:24:40.941946   68418 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 20:24:40.942006   68418 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 20:24:40.942023   68418 cache.go:56] Caching tarball of preloaded images
	I1001 20:24:40.942155   68418 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 20:24:40.942166   68418 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 20:24:40.942298   68418 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/config.json ...
	I1001 20:24:40.942537   68418 start.go:360] acquireMachinesLock for default-k8s-diff-port-878552: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 20:24:40.942581   68418 start.go:364] duration metric: took 24.859µs to acquireMachinesLock for "default-k8s-diff-port-878552"
	I1001 20:24:40.942601   68418 start.go:96] Skipping create...Using existing machine configuration
	I1001 20:24:40.942608   68418 fix.go:54] fixHost starting: 
	I1001 20:24:40.942921   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:24:40.942954   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:24:40.958447   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36711
	I1001 20:24:40.958976   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:24:40.960190   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:24:40.960223   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:24:40.960575   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:24:40.960770   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:24:40.960921   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetState
	I1001 20:24:40.962765   68418 fix.go:112] recreateIfNeeded on default-k8s-diff-port-878552: state=Running err=<nil>
	W1001 20:24:40.962786   68418 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 20:24:40.964520   68418 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-878552" VM ...
	I1001 20:24:37.763268   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:40.262669   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:39.025570   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:39.040932   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:39.041011   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:39.076620   65592 cri.go:89] found id: ""
	I1001 20:24:39.076649   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.076659   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:39.076666   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:39.076734   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:39.113395   65592 cri.go:89] found id: ""
	I1001 20:24:39.113422   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.113430   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:39.113436   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:39.113490   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:39.147839   65592 cri.go:89] found id: ""
	I1001 20:24:39.147877   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.147890   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:39.147899   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:39.147966   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:39.179721   65592 cri.go:89] found id: ""
	I1001 20:24:39.179758   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.179769   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:39.179777   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:39.179842   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:39.211511   65592 cri.go:89] found id: ""
	I1001 20:24:39.211541   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.211549   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:39.211554   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:39.211603   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:39.243517   65592 cri.go:89] found id: ""
	I1001 20:24:39.243544   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.243552   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:39.243557   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:39.243623   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:39.276159   65592 cri.go:89] found id: ""
	I1001 20:24:39.276182   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.276189   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:39.276195   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:39.276239   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:39.307242   65592 cri.go:89] found id: ""
	I1001 20:24:39.307274   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.307285   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:39.307295   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:39.307307   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:39.387442   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:39.387486   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:39.423123   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:39.423156   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:39.474648   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:39.474686   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:39.488129   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:39.488158   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:39.557478   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:42.058114   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:42.071979   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:42.072056   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:42.110529   65592 cri.go:89] found id: ""
	I1001 20:24:42.110557   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.110565   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:42.110570   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:42.110619   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:42.145408   65592 cri.go:89] found id: ""
	I1001 20:24:42.145436   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.145445   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:42.145450   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:42.145509   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:42.180602   65592 cri.go:89] found id: ""
	I1001 20:24:42.180641   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.180655   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:42.180664   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:42.180722   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:38.119187   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:40.619080   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:40.965599   68418 machine.go:93] provisionDockerMachine start ...
	I1001 20:24:40.965619   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:24:40.965852   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:24:40.968710   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:24:40.969253   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:20:43 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:24:40.969286   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:24:40.969517   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:24:40.969724   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:24:40.969960   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:24:40.970112   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:24:40.970316   68418 main.go:141] libmachine: Using SSH client type: native
	I1001 20:24:40.970570   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1001 20:24:40.970584   68418 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 20:24:43.860755   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:24:42.262933   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:44.762857   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:42.214116   65592 cri.go:89] found id: ""
	I1001 20:24:42.214148   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.214160   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:42.214168   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:42.214224   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:42.246785   65592 cri.go:89] found id: ""
	I1001 20:24:42.246814   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.246825   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:42.246832   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:42.246900   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:42.281586   65592 cri.go:89] found id: ""
	I1001 20:24:42.281633   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.281645   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:42.281660   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:42.281724   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:42.318982   65592 cri.go:89] found id: ""
	I1001 20:24:42.319015   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.319025   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:42.319032   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:42.319085   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:42.350592   65592 cri.go:89] found id: ""
	I1001 20:24:42.350619   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.350638   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:42.350646   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:42.350659   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:42.429111   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:42.429152   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:42.466741   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:42.466775   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:42.516829   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:42.516870   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:42.530174   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:42.530201   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:42.600444   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:45.101469   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:45.113821   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:45.113904   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:45.148105   65592 cri.go:89] found id: ""
	I1001 20:24:45.148132   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.148146   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:45.148152   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:45.148196   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:45.180980   65592 cri.go:89] found id: ""
	I1001 20:24:45.181012   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.181027   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:45.181046   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:45.181113   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:45.216971   65592 cri.go:89] found id: ""
	I1001 20:24:45.217001   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.217010   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:45.217015   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:45.217060   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:45.252240   65592 cri.go:89] found id: ""
	I1001 20:24:45.252275   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.252287   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:45.252294   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:45.252354   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:45.287389   65592 cri.go:89] found id: ""
	I1001 20:24:45.287419   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.287434   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:45.287440   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:45.287501   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:45.319980   65592 cri.go:89] found id: ""
	I1001 20:24:45.320015   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.320027   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:45.320035   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:45.320101   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:45.351894   65592 cri.go:89] found id: ""
	I1001 20:24:45.351920   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.351931   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:45.351936   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:45.351984   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:45.385370   65592 cri.go:89] found id: ""
	I1001 20:24:45.385400   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.385412   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:45.385423   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:45.385485   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:45.449558   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:45.449584   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:45.449596   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:45.524322   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:45.524372   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:45.560729   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:45.560757   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:45.614098   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:45.614139   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:43.119614   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:45.121666   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:47.618362   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:46.932587   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:24:47.263384   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:49.761472   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:48.129944   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:48.143420   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:48.143496   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:48.175627   65592 cri.go:89] found id: ""
	I1001 20:24:48.175668   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.175682   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:48.175689   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:48.175747   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:48.210422   65592 cri.go:89] found id: ""
	I1001 20:24:48.210451   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.210462   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:48.210470   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:48.210535   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:48.243916   65592 cri.go:89] found id: ""
	I1001 20:24:48.243952   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.243963   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:48.243972   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:48.244027   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:48.275802   65592 cri.go:89] found id: ""
	I1001 20:24:48.275830   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.275845   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:48.275857   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:48.275917   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:48.311539   65592 cri.go:89] found id: ""
	I1001 20:24:48.311569   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.311579   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:48.311586   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:48.311648   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:48.342606   65592 cri.go:89] found id: ""
	I1001 20:24:48.342646   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.342658   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:48.342666   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:48.342718   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:48.375554   65592 cri.go:89] found id: ""
	I1001 20:24:48.375581   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.375591   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:48.375597   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:48.375642   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:48.407747   65592 cri.go:89] found id: ""
	I1001 20:24:48.407776   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.407789   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:48.407800   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:48.407814   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:48.457470   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:48.457503   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:48.470483   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:48.470517   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:48.533536   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:48.533565   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:48.533580   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:48.614530   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:48.614571   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:51.157091   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:51.170292   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:51.170364   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:51.203784   65592 cri.go:89] found id: ""
	I1001 20:24:51.203809   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.203822   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:51.203828   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:51.203917   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:51.239789   65592 cri.go:89] found id: ""
	I1001 20:24:51.239826   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.239834   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:51.239840   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:51.239889   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:51.274562   65592 cri.go:89] found id: ""
	I1001 20:24:51.274595   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.274607   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:51.274617   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:51.274701   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:51.306172   65592 cri.go:89] found id: ""
	I1001 20:24:51.306199   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.306207   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:51.306213   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:51.306269   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:51.339631   65592 cri.go:89] found id: ""
	I1001 20:24:51.339660   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.339668   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:51.339674   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:51.339725   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:51.372128   65592 cri.go:89] found id: ""
	I1001 20:24:51.372154   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.372163   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:51.372169   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:51.372223   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:51.403790   65592 cri.go:89] found id: ""
	I1001 20:24:51.403818   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.403828   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:51.403842   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:51.403890   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:51.437771   65592 cri.go:89] found id: ""
	I1001 20:24:51.437799   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.437808   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:51.437816   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:51.437827   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:51.489824   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:51.489864   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:51.503478   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:51.503508   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:51.573741   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:51.573768   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:51.573780   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:51.662355   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:51.662391   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:49.618685   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:51.619186   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:53.012639   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:24:51.761853   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:53.762442   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:56.261818   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:54.199747   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:54.212731   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:54.212797   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:54.244554   65592 cri.go:89] found id: ""
	I1001 20:24:54.244586   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.244596   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:54.244602   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:54.244652   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:54.280636   65592 cri.go:89] found id: ""
	I1001 20:24:54.280667   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.280679   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:54.280686   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:54.280737   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:54.318213   65592 cri.go:89] found id: ""
	I1001 20:24:54.318246   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.318257   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:54.318265   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:54.318321   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:54.353563   65592 cri.go:89] found id: ""
	I1001 20:24:54.353595   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.353606   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:54.353615   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:54.353678   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:54.387770   65592 cri.go:89] found id: ""
	I1001 20:24:54.387795   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.387803   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:54.387809   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:54.387869   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:54.421289   65592 cri.go:89] found id: ""
	I1001 20:24:54.421317   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.421325   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:54.421332   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:54.421382   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:54.456221   65592 cri.go:89] found id: ""
	I1001 20:24:54.456261   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.456274   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:54.456282   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:54.456348   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:54.488174   65592 cri.go:89] found id: ""
	I1001 20:24:54.488208   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.488219   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:54.488228   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:54.488241   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:54.540981   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:54.541020   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:54.554099   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:54.554129   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:54.623978   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:54.624013   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:54.624034   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:54.704703   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:54.704738   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:54.119129   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:56.619282   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:56.088698   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:24:58.262173   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:00.761865   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:57.241791   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:57.254771   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:57.254843   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:57.290226   65592 cri.go:89] found id: ""
	I1001 20:24:57.290263   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.290271   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:57.290277   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:57.290336   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:57.324910   65592 cri.go:89] found id: ""
	I1001 20:24:57.324938   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.324946   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:57.324951   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:57.325068   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:57.360553   65592 cri.go:89] found id: ""
	I1001 20:24:57.360586   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.360601   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:57.360608   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:57.360669   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:57.395182   65592 cri.go:89] found id: ""
	I1001 20:24:57.395216   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.395229   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:57.395236   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:57.395296   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:57.428967   65592 cri.go:89] found id: ""
	I1001 20:24:57.428998   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.429011   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:57.429017   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:57.429072   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:57.462483   65592 cri.go:89] found id: ""
	I1001 20:24:57.462511   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.462519   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:57.462525   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:57.462581   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:57.495505   65592 cri.go:89] found id: ""
	I1001 20:24:57.495538   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.495550   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:57.495556   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:57.495615   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:57.528132   65592 cri.go:89] found id: ""
	I1001 20:24:57.528164   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.528176   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:57.528188   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:57.528203   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:57.596557   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:57.596583   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:57.596598   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:57.676797   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:57.676830   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:57.714624   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:57.714653   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:57.763801   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:57.763839   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:00.277808   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:00.291432   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:00.291489   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:00.327524   65592 cri.go:89] found id: ""
	I1001 20:25:00.327554   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.327562   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:00.327568   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:00.327618   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:00.364125   65592 cri.go:89] found id: ""
	I1001 20:25:00.364153   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.364162   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:00.364167   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:00.364229   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:00.404507   65592 cri.go:89] found id: ""
	I1001 20:25:00.404543   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.404555   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:00.404564   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:00.404770   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:00.438761   65592 cri.go:89] found id: ""
	I1001 20:25:00.438792   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.438800   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:00.438807   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:00.438862   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:00.473263   65592 cri.go:89] found id: ""
	I1001 20:25:00.473301   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.473313   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:00.473321   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:00.473391   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:00.510276   65592 cri.go:89] found id: ""
	I1001 20:25:00.510307   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.510317   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:00.510324   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:00.510383   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:00.545118   65592 cri.go:89] found id: ""
	I1001 20:25:00.545149   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.545165   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:00.545173   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:00.545229   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:00.577773   65592 cri.go:89] found id: ""
	I1001 20:25:00.577799   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.577810   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:00.577821   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:00.577835   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:00.628978   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:00.629012   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:00.642192   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:00.642225   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:00.711399   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:00.711432   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:00.711446   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:00.792477   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:00.792514   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:59.118041   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:01.119565   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:02.164636   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:05.236638   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:02.762323   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:04.764910   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:03.332492   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:03.347542   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:03.347622   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:03.388263   65592 cri.go:89] found id: ""
	I1001 20:25:03.388292   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.388300   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:03.388306   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:03.388353   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:03.421489   65592 cri.go:89] found id: ""
	I1001 20:25:03.421525   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.421534   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:03.421539   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:03.421634   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:03.457139   65592 cri.go:89] found id: ""
	I1001 20:25:03.457172   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.457182   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:03.457189   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:03.457251   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:03.497203   65592 cri.go:89] found id: ""
	I1001 20:25:03.497232   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.497241   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:03.497247   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:03.497313   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:03.535137   65592 cri.go:89] found id: ""
	I1001 20:25:03.535163   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.535171   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:03.535176   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:03.535221   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:03.569131   65592 cri.go:89] found id: ""
	I1001 20:25:03.569158   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.569166   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:03.569171   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:03.569217   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:03.605289   65592 cri.go:89] found id: ""
	I1001 20:25:03.605321   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.605329   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:03.605336   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:03.605389   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:03.651086   65592 cri.go:89] found id: ""
	I1001 20:25:03.651115   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.651123   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:03.651134   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:03.651145   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:03.731256   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:03.731281   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:03.731299   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:03.809393   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:03.809442   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:03.849171   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:03.849198   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:03.898009   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:03.898045   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:06.411962   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:06.425432   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:06.425513   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:06.463339   65592 cri.go:89] found id: ""
	I1001 20:25:06.463371   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.463383   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:06.463391   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:06.463455   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:06.502527   65592 cri.go:89] found id: ""
	I1001 20:25:06.502561   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.502569   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:06.502611   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:06.502687   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:06.547428   65592 cri.go:89] found id: ""
	I1001 20:25:06.547465   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.547474   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:06.547480   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:06.547539   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:06.581672   65592 cri.go:89] found id: ""
	I1001 20:25:06.581699   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.581708   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:06.581713   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:06.581769   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:06.615391   65592 cri.go:89] found id: ""
	I1001 20:25:06.615436   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.615449   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:06.615457   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:06.615525   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:06.651019   65592 cri.go:89] found id: ""
	I1001 20:25:06.651050   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.651060   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:06.651067   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:06.651142   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:06.687887   65592 cri.go:89] found id: ""
	I1001 20:25:06.687912   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.687922   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:06.687929   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:06.687982   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:06.729234   65592 cri.go:89] found id: ""
	I1001 20:25:06.729263   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.729273   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:06.729282   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:06.729296   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:06.747295   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:06.747326   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:06.816480   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:06.816511   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:06.816524   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:06.896918   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:06.896957   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:06.938922   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:06.938958   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:03.619205   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:06.118575   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:06.765214   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:09.261806   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:11.262162   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:09.494252   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:09.508085   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:09.508171   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:09.542999   65592 cri.go:89] found id: ""
	I1001 20:25:09.543029   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.543037   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:09.543043   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:09.543100   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:09.578112   65592 cri.go:89] found id: ""
	I1001 20:25:09.578137   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.578145   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:09.578150   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:09.578199   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:09.613123   65592 cri.go:89] found id: ""
	I1001 20:25:09.613150   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.613158   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:09.613166   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:09.613223   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:09.648172   65592 cri.go:89] found id: ""
	I1001 20:25:09.648214   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.648223   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:09.648230   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:09.648302   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:09.681217   65592 cri.go:89] found id: ""
	I1001 20:25:09.681244   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.681254   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:09.681261   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:09.681320   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:09.718166   65592 cri.go:89] found id: ""
	I1001 20:25:09.718196   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.718204   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:09.718212   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:09.718272   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:09.751910   65592 cri.go:89] found id: ""
	I1001 20:25:09.751942   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.751951   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:09.751956   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:09.752004   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:09.789213   65592 cri.go:89] found id: ""
	I1001 20:25:09.789237   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.789246   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:09.789254   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:09.789265   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:09.826746   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:09.826780   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:09.879079   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:09.879123   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:09.892480   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:09.892507   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:09.967048   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:09.967084   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:09.967103   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:08.118822   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:10.120018   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:12.620582   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:14.356624   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:13.262286   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:15.263349   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:12.545057   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:12.557888   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:12.557969   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:12.594881   65592 cri.go:89] found id: ""
	I1001 20:25:12.594928   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.594942   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:12.594952   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:12.595021   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:12.631393   65592 cri.go:89] found id: ""
	I1001 20:25:12.631425   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.631437   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:12.631445   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:12.631504   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:12.666442   65592 cri.go:89] found id: ""
	I1001 20:25:12.666476   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.666486   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:12.666493   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:12.666548   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:12.703321   65592 cri.go:89] found id: ""
	I1001 20:25:12.703359   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.703371   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:12.703379   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:12.703444   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:12.742188   65592 cri.go:89] found id: ""
	I1001 20:25:12.742216   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.742224   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:12.742230   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:12.742276   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:12.781829   65592 cri.go:89] found id: ""
	I1001 20:25:12.781859   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.781869   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:12.781876   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:12.781940   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:12.815368   65592 cri.go:89] found id: ""
	I1001 20:25:12.815397   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.815405   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:12.815411   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:12.815463   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:12.850913   65592 cri.go:89] found id: ""
	I1001 20:25:12.850941   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.850949   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:12.850958   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:12.850968   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:12.901409   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:12.901443   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:12.914517   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:12.914567   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:12.980086   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:12.980119   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:12.980135   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:13.055950   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:13.055989   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:15.595692   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:15.609648   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:15.609728   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:15.645477   65592 cri.go:89] found id: ""
	I1001 20:25:15.645502   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.645510   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:15.645514   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:15.645558   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:15.679674   65592 cri.go:89] found id: ""
	I1001 20:25:15.679702   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.679711   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:15.679717   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:15.679774   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:15.718057   65592 cri.go:89] found id: ""
	I1001 20:25:15.718082   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.718092   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:15.718097   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:15.718153   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:15.754094   65592 cri.go:89] found id: ""
	I1001 20:25:15.754121   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.754130   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:15.754136   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:15.754189   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:15.790415   65592 cri.go:89] found id: ""
	I1001 20:25:15.790450   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.790464   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:15.790472   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:15.790535   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:15.825603   65592 cri.go:89] found id: ""
	I1001 20:25:15.825630   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.825645   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:15.825653   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:15.825717   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:15.861330   65592 cri.go:89] found id: ""
	I1001 20:25:15.861356   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.861368   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:15.861375   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:15.861451   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:15.897534   65592 cri.go:89] found id: ""
	I1001 20:25:15.897564   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.897575   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:15.897584   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:15.897598   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:15.972842   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:15.972881   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:16.010625   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:16.010653   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:16.062717   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:16.062762   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:16.076538   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:16.076568   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:16.156886   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:15.118878   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:17.119791   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:17.428649   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:17.764089   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:20.261752   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:18.657436   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:18.673018   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:18.673093   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:18.708040   65592 cri.go:89] found id: ""
	I1001 20:25:18.708078   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.708091   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:18.708100   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:18.708167   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:18.740152   65592 cri.go:89] found id: ""
	I1001 20:25:18.740188   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.740200   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:18.740207   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:18.740264   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:18.778238   65592 cri.go:89] found id: ""
	I1001 20:25:18.778270   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.778279   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:18.778287   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:18.778351   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:18.815450   65592 cri.go:89] found id: ""
	I1001 20:25:18.815489   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.815503   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:18.815512   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:18.815576   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:18.850008   65592 cri.go:89] found id: ""
	I1001 20:25:18.850038   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.850047   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:18.850053   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:18.850104   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:18.890919   65592 cri.go:89] found id: ""
	I1001 20:25:18.890943   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.890951   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:18.890957   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:18.891004   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:18.934196   65592 cri.go:89] found id: ""
	I1001 20:25:18.934228   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.934240   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:18.934247   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:18.934307   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:18.977817   65592 cri.go:89] found id: ""
	I1001 20:25:18.977850   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.977862   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:18.977875   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:18.977889   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:19.039867   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:19.039910   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:19.054277   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:19.054310   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:19.125736   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:19.125765   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:19.125782   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:19.208588   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:19.208622   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:21.750881   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:21.766638   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:21.766712   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:21.801906   65592 cri.go:89] found id: ""
	I1001 20:25:21.801930   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.801938   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:21.801944   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:21.801990   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:21.842801   65592 cri.go:89] found id: ""
	I1001 20:25:21.842830   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.842844   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:21.842852   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:21.842917   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:21.876550   65592 cri.go:89] found id: ""
	I1001 20:25:21.876577   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.876588   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:21.876594   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:21.876647   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:21.910972   65592 cri.go:89] found id: ""
	I1001 20:25:21.911007   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.911016   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:21.911022   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:21.911098   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:21.945721   65592 cri.go:89] found id: ""
	I1001 20:25:21.945753   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.945765   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:21.945773   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:21.945833   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:21.982101   65592 cri.go:89] found id: ""
	I1001 20:25:21.982131   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.982143   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:21.982151   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:21.982242   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:22.016526   65592 cri.go:89] found id: ""
	I1001 20:25:22.016558   65592 logs.go:276] 0 containers: []
	W1001 20:25:22.016569   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:22.016577   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:22.016632   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:22.054792   65592 cri.go:89] found id: ""
	I1001 20:25:22.054822   65592 logs.go:276] 0 containers: []
	W1001 20:25:22.054833   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:22.054844   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:22.054863   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:22.105936   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:22.105974   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:22.120834   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:22.120858   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:22.195177   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:22.195211   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:22.195228   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:19.120304   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:21.618511   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:23.512698   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:22.264134   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:24.762355   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:22.281244   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:22.281285   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:24.824197   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:24.840967   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:24.841030   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:24.882399   65592 cri.go:89] found id: ""
	I1001 20:25:24.882429   65592 logs.go:276] 0 containers: []
	W1001 20:25:24.882443   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:24.882449   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:24.882497   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:24.935548   65592 cri.go:89] found id: ""
	I1001 20:25:24.935581   65592 logs.go:276] 0 containers: []
	W1001 20:25:24.935590   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:24.935596   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:24.935644   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:24.976931   65592 cri.go:89] found id: ""
	I1001 20:25:24.976958   65592 logs.go:276] 0 containers: []
	W1001 20:25:24.976969   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:24.976976   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:24.977035   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:25.009926   65592 cri.go:89] found id: ""
	I1001 20:25:25.009959   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.009968   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:25.009975   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:25.010039   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:25.043261   65592 cri.go:89] found id: ""
	I1001 20:25:25.043299   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.043310   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:25.043316   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:25.043377   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:25.075177   65592 cri.go:89] found id: ""
	I1001 20:25:25.075205   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.075214   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:25.075221   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:25.075267   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:25.109792   65592 cri.go:89] found id: ""
	I1001 20:25:25.109832   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.109845   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:25.109871   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:25.109942   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:25.148721   65592 cri.go:89] found id: ""
	I1001 20:25:25.148753   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.148763   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:25.148772   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:25.148790   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:25.161802   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:25.161841   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:25.227699   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:25.227732   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:25.227750   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:25.314028   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:25.314075   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:25.354881   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:25.354919   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:23.618792   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:26.118493   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:26.580628   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:27.262584   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:29.761866   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:27.906936   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:27.920745   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:27.920806   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:27.955399   65592 cri.go:89] found id: ""
	I1001 20:25:27.955426   65592 logs.go:276] 0 containers: []
	W1001 20:25:27.955444   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:27.955450   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:27.955503   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:27.993714   65592 cri.go:89] found id: ""
	I1001 20:25:27.993747   65592 logs.go:276] 0 containers: []
	W1001 20:25:27.993759   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:27.993766   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:27.993827   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:28.028439   65592 cri.go:89] found id: ""
	I1001 20:25:28.028475   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.028487   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:28.028494   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:28.028563   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:28.072935   65592 cri.go:89] found id: ""
	I1001 20:25:28.072966   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.072977   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:28.072985   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:28.073050   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:28.107241   65592 cri.go:89] found id: ""
	I1001 20:25:28.107275   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.107285   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:28.107293   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:28.107357   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:28.141382   65592 cri.go:89] found id: ""
	I1001 20:25:28.141412   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.141423   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:28.141431   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:28.141494   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:28.175749   65592 cri.go:89] found id: ""
	I1001 20:25:28.175782   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.175794   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:28.175801   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:28.175864   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:28.214968   65592 cri.go:89] found id: ""
	I1001 20:25:28.214997   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.215006   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:28.215015   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:28.215027   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:28.259588   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:28.259619   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:28.314439   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:28.314480   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:28.327938   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:28.327967   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:28.399479   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:28.399508   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:28.399523   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:30.978863   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:30.991415   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:30.991493   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:31.026443   65592 cri.go:89] found id: ""
	I1001 20:25:31.026480   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.026494   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:31.026513   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:31.026576   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:31.060635   65592 cri.go:89] found id: ""
	I1001 20:25:31.060663   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.060678   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:31.060684   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:31.060743   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:31.095494   65592 cri.go:89] found id: ""
	I1001 20:25:31.095525   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.095533   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:31.095540   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:31.095587   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:31.130693   65592 cri.go:89] found id: ""
	I1001 20:25:31.130718   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.130728   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:31.130741   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:31.130802   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:31.167928   65592 cri.go:89] found id: ""
	I1001 20:25:31.167960   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.167973   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:31.167980   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:31.168033   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:31.202813   65592 cri.go:89] found id: ""
	I1001 20:25:31.202843   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.202855   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:31.202864   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:31.202925   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:31.240424   65592 cri.go:89] found id: ""
	I1001 20:25:31.240459   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.240468   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:31.240474   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:31.240521   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:31.275470   65592 cri.go:89] found id: ""
	I1001 20:25:31.275502   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.275510   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:31.275518   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:31.275529   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:31.329604   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:31.329642   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:31.342695   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:31.342724   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:31.410169   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:31.410275   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:31.410303   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:31.489630   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:31.489677   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:28.118608   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:30.118718   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:32.119227   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:32.660640   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:35.732653   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:31.762062   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:33.764597   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:36.263251   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:34.027406   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:34.039902   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:34.039975   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:34.074992   65592 cri.go:89] found id: ""
	I1001 20:25:34.075025   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.075038   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:34.075045   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:34.075106   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:34.110264   65592 cri.go:89] found id: ""
	I1001 20:25:34.110293   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.110304   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:34.110311   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:34.110371   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:34.147097   65592 cri.go:89] found id: ""
	I1001 20:25:34.147132   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.147143   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:34.147151   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:34.147208   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:34.179453   65592 cri.go:89] found id: ""
	I1001 20:25:34.179481   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.179491   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:34.179500   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:34.179554   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:34.212407   65592 cri.go:89] found id: ""
	I1001 20:25:34.212433   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.212442   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:34.212449   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:34.212495   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:34.244400   65592 cri.go:89] found id: ""
	I1001 20:25:34.244429   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.244440   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:34.244447   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:34.244510   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:34.278423   65592 cri.go:89] found id: ""
	I1001 20:25:34.278448   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.278458   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:34.278464   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:34.278520   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:34.311019   65592 cri.go:89] found id: ""
	I1001 20:25:34.311049   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.311059   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:34.311072   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:34.311083   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:34.347521   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:34.347549   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:34.400717   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:34.400754   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:34.414550   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:34.414576   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:34.486478   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:34.486503   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:34.486519   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:37.071687   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:37.084941   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:37.085025   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:37.119834   65592 cri.go:89] found id: ""
	I1001 20:25:37.119862   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.119870   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:37.119875   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:37.119984   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:37.154795   65592 cri.go:89] found id: ""
	I1001 20:25:37.154832   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.154851   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:37.154867   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:37.154927   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:37.191552   65592 cri.go:89] found id: ""
	I1001 20:25:37.191581   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.191592   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:37.191599   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:37.191670   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:34.119370   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:36.119698   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:38.761540   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:40.762894   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:37.228883   65592 cri.go:89] found id: ""
	I1001 20:25:37.228918   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.228928   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:37.228936   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:37.229000   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:37.263533   65592 cri.go:89] found id: ""
	I1001 20:25:37.263558   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.263568   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:37.263577   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:37.263638   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:37.297367   65592 cri.go:89] found id: ""
	I1001 20:25:37.297401   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.297414   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:37.297422   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:37.297486   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:37.331091   65592 cri.go:89] found id: ""
	I1001 20:25:37.331121   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.331129   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:37.331135   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:37.331202   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:37.364861   65592 cri.go:89] found id: ""
	I1001 20:25:37.364889   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.364897   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:37.364905   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:37.364916   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:37.417507   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:37.417545   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:37.431613   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:37.431646   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:37.497821   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:37.497846   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:37.497861   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:37.578951   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:37.578996   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:40.121350   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:40.134553   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:40.134634   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:40.169277   65592 cri.go:89] found id: ""
	I1001 20:25:40.169313   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.169325   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:40.169333   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:40.169399   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:40.204111   65592 cri.go:89] found id: ""
	I1001 20:25:40.204144   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.204153   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:40.204159   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:40.204206   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:40.237841   65592 cri.go:89] found id: ""
	I1001 20:25:40.237872   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.237880   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:40.237886   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:40.237942   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:40.273081   65592 cri.go:89] found id: ""
	I1001 20:25:40.273108   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.273117   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:40.273123   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:40.273186   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:40.307351   65592 cri.go:89] found id: ""
	I1001 20:25:40.307384   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.307394   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:40.307399   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:40.307462   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:40.340543   65592 cri.go:89] found id: ""
	I1001 20:25:40.340569   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.340578   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:40.340584   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:40.340655   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:40.376070   65592 cri.go:89] found id: ""
	I1001 20:25:40.376112   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.376123   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:40.376130   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:40.376194   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:40.410236   65592 cri.go:89] found id: ""
	I1001 20:25:40.410267   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.410279   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:40.410289   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:40.410300   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:40.463799   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:40.463835   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:40.478403   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:40.478436   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:40.547250   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:40.547279   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:40.547291   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:40.630061   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:40.630098   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:38.617891   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:40.618430   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:41.612771   65263 pod_ready.go:82] duration metric: took 4m0.000338317s for pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace to be "Ready" ...
	E1001 20:25:41.612803   65263 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace to be "Ready" (will not retry!)
	I1001 20:25:41.612832   65263 pod_ready.go:39] duration metric: took 4m13.169141642s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:25:41.612859   65263 kubeadm.go:597] duration metric: took 4m21.203039001s to restartPrimaryControlPlane
	W1001 20:25:41.612919   65263 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1001 20:25:41.612944   65263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1001 20:25:41.812689   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:44.884661   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:43.264334   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:45.762034   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:43.170764   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:43.183046   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:43.183124   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:43.222995   65592 cri.go:89] found id: ""
	I1001 20:25:43.223029   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.223038   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:43.223044   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:43.223105   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:43.256861   65592 cri.go:89] found id: ""
	I1001 20:25:43.256891   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.256902   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:43.256910   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:43.257002   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:43.292643   65592 cri.go:89] found id: ""
	I1001 20:25:43.292687   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.292698   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:43.292704   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:43.292754   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:43.326539   65592 cri.go:89] found id: ""
	I1001 20:25:43.326568   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.326576   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:43.326582   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:43.326628   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:43.359787   65592 cri.go:89] found id: ""
	I1001 20:25:43.359813   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.359822   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:43.359828   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:43.359890   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:43.392045   65592 cri.go:89] found id: ""
	I1001 20:25:43.392076   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.392086   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:43.392092   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:43.392145   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:43.429498   65592 cri.go:89] found id: ""
	I1001 20:25:43.429529   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.429538   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:43.429544   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:43.429591   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:43.462728   65592 cri.go:89] found id: ""
	I1001 20:25:43.462760   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.462771   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:43.462781   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:43.462798   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:43.512683   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:43.512717   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:43.527253   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:43.527285   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:43.598963   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:43.598989   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:43.599003   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:43.679743   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:43.679790   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:46.217101   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:46.230349   65592 kubeadm.go:597] duration metric: took 4m1.895228035s to restartPrimaryControlPlane
	W1001 20:25:46.230421   65592 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1001 20:25:46.230450   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1001 20:25:47.762241   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:49.763115   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:47.271291   65592 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.040818559s)
	I1001 20:25:47.271362   65592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:25:47.285083   65592 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 20:25:47.295774   65592 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:25:47.305487   65592 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:25:47.305511   65592 kubeadm.go:157] found existing configuration files:
	
	I1001 20:25:47.305568   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 20:25:47.314488   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:25:47.314573   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:25:47.323852   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 20:25:47.332496   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:25:47.332553   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:25:47.341236   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 20:25:47.349932   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:25:47.350002   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:25:47.359345   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 20:25:47.369180   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:25:47.369233   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 20:25:47.378232   65592 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 20:25:47.595501   65592 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 20:25:50.964640   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:54.036635   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:52.261890   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:54.761886   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:00.116640   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:57.261837   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:59.262445   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:01.262529   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:03.188675   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:03.762361   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:06.261749   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:07.708438   65263 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.095470945s)
	I1001 20:26:07.708514   65263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:26:07.722982   65263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 20:26:07.732118   65263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:26:07.741172   65263 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:26:07.741198   65263 kubeadm.go:157] found existing configuration files:
	
	I1001 20:26:07.741244   65263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 20:26:07.749683   65263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:26:07.749744   65263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:26:07.758875   65263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 20:26:07.767668   65263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:26:07.767739   65263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:26:07.776648   65263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 20:26:07.785930   65263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:26:07.785982   65263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:26:07.794739   65263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 20:26:07.803180   65263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:26:07.803241   65263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 20:26:07.812178   65263 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 20:26:07.851817   65263 kubeadm.go:310] W1001 20:26:07.836874    2546 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 20:26:07.852402   65263 kubeadm.go:310] W1001 20:26:07.837670    2546 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 20:26:09.272541   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:08.761247   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:10.761797   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:07.957551   65263 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 20:26:12.344653   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:16.385918   65263 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 20:26:16.385979   65263 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:26:16.386062   65263 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:26:16.386172   65263 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:26:16.386297   65263 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 20:26:16.386400   65263 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 20:26:16.387827   65263 out.go:235]   - Generating certificates and keys ...
	I1001 20:26:16.387909   65263 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:26:16.387989   65263 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:26:16.388104   65263 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 20:26:16.388191   65263 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 20:26:16.388284   65263 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 20:26:16.388370   65263 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 20:26:16.388464   65263 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 20:26:16.388545   65263 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 20:26:16.388646   65263 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 20:26:16.388775   65263 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 20:26:16.388824   65263 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 20:26:16.388908   65263 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:26:16.388956   65263 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:26:16.389006   65263 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 20:26:16.389048   65263 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:26:16.389117   65263 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:26:16.389201   65263 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:26:16.389333   65263 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:26:16.389444   65263 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:26:16.390823   65263 out.go:235]   - Booting up control plane ...
	I1001 20:26:16.390917   65263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:26:16.390992   65263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:26:16.391061   65263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:26:16.391161   65263 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:26:16.391285   65263 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:26:16.391335   65263 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:26:16.391468   65263 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 20:26:16.391572   65263 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 20:26:16.391628   65263 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.349149ms
	I1001 20:26:16.391686   65263 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1001 20:26:16.391736   65263 kubeadm.go:310] [api-check] The API server is healthy after 5.002046172s
	I1001 20:26:16.391818   65263 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 20:26:16.391923   65263 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 20:26:16.391999   65263 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 20:26:16.392169   65263 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-106982 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 20:26:16.392225   65263 kubeadm.go:310] [bootstrap-token] Using token: xlxn2k.owwnzt3amr4nx0st
	I1001 20:26:16.393437   65263 out.go:235]   - Configuring RBAC rules ...
	I1001 20:26:16.393539   65263 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 20:26:16.393609   65263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 20:26:16.393722   65263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 20:26:16.393834   65263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 20:26:16.393940   65263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 20:26:16.394017   65263 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 20:26:16.394117   65263 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 20:26:16.394154   65263 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 20:26:16.394195   65263 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 20:26:16.394200   65263 kubeadm.go:310] 
	I1001 20:26:16.394259   65263 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 20:26:16.394269   65263 kubeadm.go:310] 
	I1001 20:26:16.394335   65263 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 20:26:16.394341   65263 kubeadm.go:310] 
	I1001 20:26:16.394363   65263 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 20:26:16.394440   65263 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 20:26:16.394496   65263 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 20:26:16.394502   65263 kubeadm.go:310] 
	I1001 20:26:16.394553   65263 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 20:26:16.394559   65263 kubeadm.go:310] 
	I1001 20:26:16.394601   65263 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 20:26:16.394611   65263 kubeadm.go:310] 
	I1001 20:26:16.394656   65263 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 20:26:16.394720   65263 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 20:26:16.394804   65263 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 20:26:16.394814   65263 kubeadm.go:310] 
	I1001 20:26:16.394901   65263 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 20:26:16.394996   65263 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 20:26:16.395010   65263 kubeadm.go:310] 
	I1001 20:26:16.395128   65263 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xlxn2k.owwnzt3amr4nx0st \
	I1001 20:26:16.395262   65263 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 \
	I1001 20:26:16.395299   65263 kubeadm.go:310] 	--control-plane 
	I1001 20:26:16.395308   65263 kubeadm.go:310] 
	I1001 20:26:16.395426   65263 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 20:26:16.395436   65263 kubeadm.go:310] 
	I1001 20:26:16.395548   65263 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xlxn2k.owwnzt3amr4nx0st \
	I1001 20:26:16.395648   65263 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 
	I1001 20:26:16.395658   65263 cni.go:84] Creating CNI manager for ""
	I1001 20:26:16.395665   65263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:26:16.396852   65263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 20:26:12.763435   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:15.262381   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:16.398081   65263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 20:26:16.407920   65263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
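The 1-k8s.conflist pushed above is the bridge CNI configuration minikube drops into /etc/cni/net.d. The exact 496-byte payload is not reproduced in this log; the following is only a rough sketch, in Go with the standard library, of writing a hypothetical minimal bridge conflist the same way. The JSON contents (plugin list, subnet) are illustrative assumptions, not minikube's actual file.

    package main

    import (
    	"log"
    	"os"
    )

    // Hypothetical minimal bridge CNI config; not the actual 1-k8s.conflist
    // contents referenced in the log above.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
    	// Same destination directory and file name as in the log.
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }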
	I1001 20:26:16.428213   65263 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 20:26:16.428312   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:16.428344   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-106982 minikube.k8s.io/updated_at=2024_10_01T20_26_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=embed-certs-106982 minikube.k8s.io/primary=true
	I1001 20:26:16.667876   65263 ops.go:34] apiserver oom_adj: -16
	I1001 20:26:16.667891   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:17.168194   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:17.668772   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:18.168815   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:18.668087   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:19.168767   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:19.668624   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:20.167974   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:20.668002   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:20.758486   65263 kubeadm.go:1113] duration metric: took 4.330238814s to wait for elevateKubeSystemPrivileges
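The repeated "kubectl get sa default" runs above are a poll loop: elevateKubeSystemPrivileges retries roughly every 500ms until the default service account exists, then proceeds. A minimal local sketch of that loop follows, using the kubectl binary and kubeconfig paths from the log; in the test these commands actually run over SSH via ssh_runner, and the 2-minute timeout here is an illustrative choice.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // Poll "kubectl get sa default" until it succeeds or the timeout expires,
    // mirroring the repeated ssh_runner calls in the log above.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
    		if err := cmd.Run(); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
    	err := waitForDefaultSA(
    		"/var/lib/minikube/binaries/v1.31.1/kubectl",
    		"/var/lib/minikube/kubeconfig",
    		2*time.Minute,
    	)
    	fmt.Println("waitForDefaultSA:", err)
    }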
	I1001 20:26:20.758520   65263 kubeadm.go:394] duration metric: took 5m0.403602376s to StartCluster
	I1001 20:26:20.758539   65263 settings.go:142] acquiring lock: {Name:mkeb3fe64ed992373f048aef24eb0d675bcab60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:26:20.758613   65263 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:26:20.760430   65263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/kubeconfig: {Name:mk7ef19d546928001abf2478a3abe1d17765a591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:26:20.760678   65263 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 20:26:20.760746   65263 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 20:26:20.760852   65263 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-106982"
	I1001 20:26:20.760881   65263 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-106982"
	I1001 20:26:20.760877   65263 addons.go:69] Setting default-storageclass=true in profile "embed-certs-106982"
	W1001 20:26:20.760893   65263 addons.go:243] addon storage-provisioner should already be in state true
	I1001 20:26:20.760891   65263 addons.go:69] Setting metrics-server=true in profile "embed-certs-106982"
	I1001 20:26:20.760926   65263 host.go:66] Checking if "embed-certs-106982" exists ...
	I1001 20:26:20.760926   65263 addons.go:234] Setting addon metrics-server=true in "embed-certs-106982"
	W1001 20:26:20.761009   65263 addons.go:243] addon metrics-server should already be in state true
	I1001 20:26:20.761041   65263 host.go:66] Checking if "embed-certs-106982" exists ...
	I1001 20:26:20.760906   65263 config.go:182] Loaded profile config "embed-certs-106982": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:26:20.760902   65263 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-106982"
	I1001 20:26:20.761374   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.761426   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.761429   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.761468   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.761545   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.761591   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.762861   65263 out.go:177] * Verifying Kubernetes components...
	I1001 20:26:20.764393   65263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:26:20.778448   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38619
	I1001 20:26:20.779031   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.779198   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39861
	I1001 20:26:20.779632   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.779657   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.779822   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.780085   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.780331   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.780352   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.780789   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.780829   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.781030   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.781240   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetState
	I1001 20:26:20.781260   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37397
	I1001 20:26:20.781672   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.782168   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.782189   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.782587   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.783037   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.783073   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.784573   65263 addons.go:234] Setting addon default-storageclass=true in "embed-certs-106982"
	W1001 20:26:20.784589   65263 addons.go:243] addon default-storageclass should already be in state true
	I1001 20:26:20.784609   65263 host.go:66] Checking if "embed-certs-106982" exists ...
	I1001 20:26:20.784877   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.784912   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.797787   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37113
	I1001 20:26:20.797864   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33201
	I1001 20:26:20.798261   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.798311   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.798836   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.798855   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.798931   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.798951   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.799226   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35413
	I1001 20:26:20.799230   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.799367   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.799409   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetState
	I1001 20:26:20.799515   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetState
	I1001 20:26:20.799695   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.800114   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.800130   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.800602   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.801316   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.801331   65263 main.go:141] libmachine: (embed-certs-106982) Calling .DriverName
	I1001 20:26:20.801351   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.801391   65263 main.go:141] libmachine: (embed-certs-106982) Calling .DriverName
	I1001 20:26:20.803237   65263 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1001 20:26:20.803241   65263 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 20:26:18.420597   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:17.762869   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:20.262479   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:20.804378   65263 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1001 20:26:20.804394   65263 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1001 20:26:20.804411   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHHostname
	I1001 20:26:20.804571   65263 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:26:20.804586   65263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 20:26:20.804603   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHHostname
	I1001 20:26:20.808458   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.808866   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.808906   65263 main.go:141] libmachine: (embed-certs-106982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:ab:67", ip: ""} in network mk-embed-certs-106982: {Iface:virbr1 ExpiryTime:2024-10-01 21:21:05 +0000 UTC Type:0 Mac:52:54:00:7f:ab:67 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:embed-certs-106982 Clientid:01:52:54:00:7f:ab:67}
	I1001 20:26:20.808923   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined IP address 192.168.39.203 and MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.809183   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHPort
	I1001 20:26:20.809326   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHKeyPath
	I1001 20:26:20.809462   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHUsername
	I1001 20:26:20.809582   65263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/embed-certs-106982/id_rsa Username:docker}
	I1001 20:26:20.809917   65263 main.go:141] libmachine: (embed-certs-106982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:ab:67", ip: ""} in network mk-embed-certs-106982: {Iface:virbr1 ExpiryTime:2024-10-01 21:21:05 +0000 UTC Type:0 Mac:52:54:00:7f:ab:67 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:embed-certs-106982 Clientid:01:52:54:00:7f:ab:67}
	I1001 20:26:20.809941   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined IP address 192.168.39.203 and MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.809975   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHPort
	I1001 20:26:20.810172   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHKeyPath
	I1001 20:26:20.810320   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHUsername
	I1001 20:26:20.810498   65263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/embed-certs-106982/id_rsa Username:docker}
	I1001 20:26:20.818676   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33699
	I1001 20:26:20.819066   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.819574   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.819596   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.819900   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.820110   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetState
	I1001 20:26:20.821633   65263 main.go:141] libmachine: (embed-certs-106982) Calling .DriverName
	I1001 20:26:20.821820   65263 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 20:26:20.821834   65263 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 20:26:20.821852   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHHostname
	I1001 20:26:20.824684   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.825165   65263 main.go:141] libmachine: (embed-certs-106982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:ab:67", ip: ""} in network mk-embed-certs-106982: {Iface:virbr1 ExpiryTime:2024-10-01 21:21:05 +0000 UTC Type:0 Mac:52:54:00:7f:ab:67 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:embed-certs-106982 Clientid:01:52:54:00:7f:ab:67}
	I1001 20:26:20.825205   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined IP address 192.168.39.203 and MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.825425   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHPort
	I1001 20:26:20.825577   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHKeyPath
	I1001 20:26:20.825697   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHUsername
	I1001 20:26:20.825835   65263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/embed-certs-106982/id_rsa Username:docker}
	I1001 20:26:20.984756   65263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:26:21.014051   65263 node_ready.go:35] waiting up to 6m0s for node "embed-certs-106982" to be "Ready" ...
	I1001 20:26:21.023227   65263 node_ready.go:49] node "embed-certs-106982" has status "Ready":"True"
	I1001 20:26:21.023274   65263 node_ready.go:38] duration metric: took 9.170523ms for node "embed-certs-106982" to be "Ready" ...
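The node_ready wait above checks the node's Ready condition through the API server. A sketch of that check with client-go follows; the kubeconfig path and node name are taken from the log, while the use of client-go itself is an assumption for illustration (the test drives this through its own helpers).

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // Report whether the named node has the Ready condition set to True.
    func nodeReady(kubeconfig, name string) (bool, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return false, err
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		return false, err
    	}
    	node, err := client.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	ready, err := nodeReady("/home/jenkins/minikube-integration/19736-11198/kubeconfig", "embed-certs-106982")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("node Ready:", ready)
    }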
	I1001 20:26:21.023286   65263 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:26:21.029371   65263 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rq5ms" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:21.113480   65263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1001 20:26:21.113509   65263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1001 20:26:21.138000   65263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1001 20:26:21.138028   65263 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1001 20:26:21.162057   65263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:26:21.240772   65263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 20:26:21.251310   65263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 20:26:21.251337   65263 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1001 20:26:21.316994   65263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 20:26:22.282775   65263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.041963655s)
	I1001 20:26:22.282809   65263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.120713974s)
	I1001 20:26:22.282835   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.282849   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.282849   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.282864   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.283226   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.283243   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.283256   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.283265   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.283244   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.283298   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.283311   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.283275   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.283278   65263 main.go:141] libmachine: (embed-certs-106982) DBG | Closing plugin on server side
	I1001 20:26:22.283808   65263 main.go:141] libmachine: (embed-certs-106982) DBG | Closing plugin on server side
	I1001 20:26:22.283808   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.283839   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.283892   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.283907   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.342382   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.342407   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.342708   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.342732   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.434882   65263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.117844425s)
	I1001 20:26:22.434937   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.434950   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.435276   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.435291   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.435301   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.435309   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.435554   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.435582   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.435593   65263 addons.go:475] Verifying addon metrics-server=true in "embed-certs-106982"
	I1001 20:26:22.437796   65263 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1001 20:26:22.438856   65263 addons.go:510] duration metric: took 1.678119807s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1001 20:26:21.492616   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:22.263077   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:24.761931   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:23.036676   65263 pod_ready.go:103] pod "coredns-7c65d6cfc9-rq5ms" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:25.537836   65263 pod_ready.go:103] pod "coredns-7c65d6cfc9-rq5ms" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:26.536827   65263 pod_ready.go:93] pod "coredns-7c65d6cfc9-rq5ms" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:26.536853   65263 pod_ready.go:82] duration metric: took 5.507455172s for pod "coredns-7c65d6cfc9-rq5ms" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:26.536865   65263 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wfdwp" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:26.541397   65263 pod_ready.go:93] pod "coredns-7c65d6cfc9-wfdwp" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:26.541427   65263 pod_ready.go:82] duration metric: took 4.554335ms for pod "coredns-7c65d6cfc9-wfdwp" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:26.541436   65263 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.048586   65263 pod_ready.go:93] pod "etcd-embed-certs-106982" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:27.048612   65263 pod_ready.go:82] duration metric: took 507.170207ms for pod "etcd-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.048622   65263 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.053967   65263 pod_ready.go:93] pod "kube-apiserver-embed-certs-106982" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:27.053994   65263 pod_ready.go:82] duration metric: took 5.365871ms for pod "kube-apiserver-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.054007   65263 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.059419   65263 pod_ready.go:93] pod "kube-controller-manager-embed-certs-106982" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:27.059441   65263 pod_ready.go:82] duration metric: took 5.427863ms for pod "kube-controller-manager-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.059452   65263 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fjnvc" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.333488   65263 pod_ready.go:93] pod "kube-proxy-fjnvc" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:27.333512   65263 pod_ready.go:82] duration metric: took 274.054021ms for pod "kube-proxy-fjnvc" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.333521   65263 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.733368   65263 pod_ready.go:93] pod "kube-scheduler-embed-certs-106982" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:27.733392   65263 pod_ready.go:82] duration metric: took 399.861423ms for pod "kube-scheduler-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.733400   65263 pod_ready.go:39] duration metric: took 6.710101442s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:26:27.733422   65263 api_server.go:52] waiting for apiserver process to appear ...
	I1001 20:26:27.733476   65263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:26:27.750336   65263 api_server.go:72] duration metric: took 6.989620923s to wait for apiserver process to appear ...
	I1001 20:26:27.750367   65263 api_server.go:88] waiting for apiserver healthz status ...
	I1001 20:26:27.750389   65263 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I1001 20:26:27.755350   65263 api_server.go:279] https://192.168.39.203:8443/healthz returned 200:
	ok
	I1001 20:26:27.756547   65263 api_server.go:141] control plane version: v1.31.1
	I1001 20:26:27.756572   65263 api_server.go:131] duration metric: took 6.196295ms to wait for apiserver health ...
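The health wait above is a plain HTTPS GET on the apiserver's /healthz endpoint, which returns "ok" with status 200 once the control plane is up. A minimal sketch follows; for brevity it skips TLS verification, whereas the real check would trust the cluster CA, and anonymous access to /healthz depends on the cluster's auth settings.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// Endpoint taken from the log; InsecureSkipVerify is a shortcut for the sketch.
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://192.168.39.203:8443/healthz")
    	if err != nil {
    		fmt.Println("healthz:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }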
	I1001 20:26:27.756583   65263 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 20:26:27.937329   65263 system_pods.go:59] 9 kube-system pods found
	I1001 20:26:27.937364   65263 system_pods.go:61] "coredns-7c65d6cfc9-rq5ms" [652fcc3d-ae12-4e11-b212-8891c1c05701] Running
	I1001 20:26:27.937373   65263 system_pods.go:61] "coredns-7c65d6cfc9-wfdwp" [1174cd48-6855-4813-9ecd-3b3a82386720] Running
	I1001 20:26:27.937380   65263 system_pods.go:61] "etcd-embed-certs-106982" [84d678ad-7322-48d0-8bab-6c683d3cf8a5] Running
	I1001 20:26:27.937386   65263 system_pods.go:61] "kube-apiserver-embed-certs-106982" [93d7fba8-306f-4b04-b65b-e3d4442f9ba6] Running
	I1001 20:26:27.937392   65263 system_pods.go:61] "kube-controller-manager-embed-certs-106982" [5e405af0-a942-4040-a955-8a007c2fc6e9] Running
	I1001 20:26:27.937396   65263 system_pods.go:61] "kube-proxy-fjnvc" [728b1b90-5961-45e9-9818-8fc6f6db1634] Running
	I1001 20:26:27.937402   65263 system_pods.go:61] "kube-scheduler-embed-certs-106982" [c0289891-9235-44de-a3cb-669648f5c18e] Running
	I1001 20:26:27.937416   65263 system_pods.go:61] "metrics-server-6867b74b74-z27sl" [dd1b4cdc-5ffb-4214-96c9-569ef4f7ba09] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:26:27.937427   65263 system_pods.go:61] "storage-provisioner" [3aaab1f2-8361-46c6-88be-ed9004628715] Running
	I1001 20:26:27.937441   65263 system_pods.go:74] duration metric: took 180.849735ms to wait for pod list to return data ...
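The system_pods wait above lists kube-system pods and reports their state (here nine pods, with metrics-server-6867b74b74-z27sl still Pending). A client-go sketch of the same listing follows; the kubeconfig path matches the one the test uses on the host, and client-go is assumed only for illustration.

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19736-11198/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Print name and phase for each kube-system pod, e.g. metrics-server shows Pending.
    	for _, p := range pods.Items {
    		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
    	}
    }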
	I1001 20:26:27.937453   65263 default_sa.go:34] waiting for default service account to be created ...
	I1001 20:26:28.133918   65263 default_sa.go:45] found service account: "default"
	I1001 20:26:28.133945   65263 default_sa.go:55] duration metric: took 196.482206ms for default service account to be created ...
	I1001 20:26:28.133955   65263 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 20:26:28.335883   65263 system_pods.go:86] 9 kube-system pods found
	I1001 20:26:28.335916   65263 system_pods.go:89] "coredns-7c65d6cfc9-rq5ms" [652fcc3d-ae12-4e11-b212-8891c1c05701] Running
	I1001 20:26:28.335923   65263 system_pods.go:89] "coredns-7c65d6cfc9-wfdwp" [1174cd48-6855-4813-9ecd-3b3a82386720] Running
	I1001 20:26:28.335927   65263 system_pods.go:89] "etcd-embed-certs-106982" [84d678ad-7322-48d0-8bab-6c683d3cf8a5] Running
	I1001 20:26:28.335931   65263 system_pods.go:89] "kube-apiserver-embed-certs-106982" [93d7fba8-306f-4b04-b65b-e3d4442f9ba6] Running
	I1001 20:26:28.335935   65263 system_pods.go:89] "kube-controller-manager-embed-certs-106982" [5e405af0-a942-4040-a955-8a007c2fc6e9] Running
	I1001 20:26:28.335939   65263 system_pods.go:89] "kube-proxy-fjnvc" [728b1b90-5961-45e9-9818-8fc6f6db1634] Running
	I1001 20:26:28.335942   65263 system_pods.go:89] "kube-scheduler-embed-certs-106982" [c0289891-9235-44de-a3cb-669648f5c18e] Running
	I1001 20:26:28.335947   65263 system_pods.go:89] "metrics-server-6867b74b74-z27sl" [dd1b4cdc-5ffb-4214-96c9-569ef4f7ba09] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:26:28.335951   65263 system_pods.go:89] "storage-provisioner" [3aaab1f2-8361-46c6-88be-ed9004628715] Running
	I1001 20:26:28.335959   65263 system_pods.go:126] duration metric: took 202.000148ms to wait for k8s-apps to be running ...
	I1001 20:26:28.335967   65263 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 20:26:28.336013   65263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:26:28.350578   65263 system_svc.go:56] duration metric: took 14.603568ms WaitForService to wait for kubelet
	I1001 20:26:28.350608   65263 kubeadm.go:582] duration metric: took 7.589898283s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:26:28.350630   65263 node_conditions.go:102] verifying NodePressure condition ...
	I1001 20:26:28.533508   65263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 20:26:28.533533   65263 node_conditions.go:123] node cpu capacity is 2
	I1001 20:26:28.533544   65263 node_conditions.go:105] duration metric: took 182.908473ms to run NodePressure ...
	I1001 20:26:28.533554   65263 start.go:241] waiting for startup goroutines ...
	I1001 20:26:28.533561   65263 start.go:246] waiting for cluster config update ...
	I1001 20:26:28.533571   65263 start.go:255] writing updated cluster config ...
	I1001 20:26:28.533862   65263 ssh_runner.go:195] Run: rm -f paused
	I1001 20:26:28.580991   65263 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 20:26:28.583612   65263 out.go:177] * Done! kubectl is now configured to use "embed-certs-106982" cluster and "default" namespace by default
	I1001 20:26:27.572585   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:30.648588   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
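The recurring "Error dialing TCP ... no route to host" lines from process 68418 come from a retry loop that keeps dialing the VM's SSH port until the guest is reachable. A minimal sketch of that pattern follows; the address matches the log, while the retry interval and overall timeout are illustrative values, not libmachine's actual ones.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // Keep dialing the SSH port until it answers or the deadline passes.
    func waitForSSH(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		fmt.Println("Error dialing TCP:", err) // same shape as the log lines above
    		time.Sleep(3 * time.Second)
    	}
    	return fmt.Errorf("ssh on %s not reachable after %s", addr, timeout)
    }

    func main() {
    	if err := waitForSSH("192.168.50.4:22", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }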
	I1001 20:26:27.262297   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:29.761795   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:31.762340   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:34.261713   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:35.263742   64676 pod_ready.go:82] duration metric: took 4m0.008218565s for pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace to be "Ready" ...
	E1001 20:26:35.263766   64676 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1001 20:26:35.263774   64676 pod_ready.go:39] duration metric: took 4m6.044360969s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:26:35.263791   64676 api_server.go:52] waiting for apiserver process to appear ...
	I1001 20:26:35.263820   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:26:35.263879   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:26:35.314427   64676 cri.go:89] found id: "a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:35.314450   64676 cri.go:89] found id: ""
	I1001 20:26:35.314457   64676 logs.go:276] 1 containers: [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd]
	I1001 20:26:35.314510   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.319554   64676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:26:35.319627   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:26:35.352986   64676 cri.go:89] found id: "586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:35.353006   64676 cri.go:89] found id: ""
	I1001 20:26:35.353013   64676 logs.go:276] 1 containers: [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f]
	I1001 20:26:35.353061   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.356979   64676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:26:35.357044   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:26:35.397175   64676 cri.go:89] found id: "4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:35.397196   64676 cri.go:89] found id: ""
	I1001 20:26:35.397203   64676 logs.go:276] 1 containers: [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6]
	I1001 20:26:35.397250   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.401025   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:26:35.401108   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:26:35.434312   64676 cri.go:89] found id: "89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:35.434333   64676 cri.go:89] found id: ""
	I1001 20:26:35.434340   64676 logs.go:276] 1 containers: [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d]
	I1001 20:26:35.434400   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.438325   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:26:35.438385   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:26:35.480711   64676 cri.go:89] found id: "fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:35.480738   64676 cri.go:89] found id: ""
	I1001 20:26:35.480750   64676 logs.go:276] 1 containers: [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8]
	I1001 20:26:35.480795   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.484996   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:26:35.485073   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:26:35.524876   64676 cri.go:89] found id: "69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:35.524909   64676 cri.go:89] found id: ""
	I1001 20:26:35.524920   64676 logs.go:276] 1 containers: [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf]
	I1001 20:26:35.524984   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.529297   64676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:26:35.529366   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:26:35.564110   64676 cri.go:89] found id: ""
	I1001 20:26:35.564138   64676 logs.go:276] 0 containers: []
	W1001 20:26:35.564149   64676 logs.go:278] No container was found matching "kindnet"
	I1001 20:26:35.564157   64676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1001 20:26:35.564222   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1001 20:26:35.599279   64676 cri.go:89] found id: "a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
	I1001 20:26:35.599311   64676 cri.go:89] found id: "652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:35.599318   64676 cri.go:89] found id: ""
	I1001 20:26:35.599327   64676 logs.go:276] 2 containers: [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b]
	I1001 20:26:35.599379   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.603377   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.607668   64676 logs.go:123] Gathering logs for kubelet ...
	I1001 20:26:35.607698   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:26:35.678017   64676 logs.go:123] Gathering logs for coredns [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6] ...
	I1001 20:26:35.678053   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:35.717814   64676 logs.go:123] Gathering logs for kube-proxy [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8] ...
	I1001 20:26:35.717842   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:35.752647   64676 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:26:35.752680   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:26:36.259582   64676 logs.go:123] Gathering logs for container status ...
	I1001 20:26:36.259630   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:26:36.299857   64676 logs.go:123] Gathering logs for storage-provisioner [652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b] ...
	I1001 20:26:36.299892   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:36.339923   64676 logs.go:123] Gathering logs for dmesg ...
	I1001 20:26:36.339973   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:26:36.353728   64676 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:26:36.353763   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 20:26:36.728608   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:39.796591   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:36.482029   64676 logs.go:123] Gathering logs for kube-apiserver [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd] ...
	I1001 20:26:36.482071   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:36.525705   64676 logs.go:123] Gathering logs for etcd [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f] ...
	I1001 20:26:36.525741   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:36.566494   64676 logs.go:123] Gathering logs for kube-scheduler [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d] ...
	I1001 20:26:36.566529   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:36.602489   64676 logs.go:123] Gathering logs for kube-controller-manager [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf] ...
	I1001 20:26:36.602523   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:36.666726   64676 logs.go:123] Gathering logs for storage-provisioner [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa] ...
	I1001 20:26:36.666757   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
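The log-gathering pass above follows a fixed pattern: list container IDs with "crictl ps -a --quiet --name=<component>", then pull the last 400 lines of each container's logs with "crictl logs --tail 400 <id>". A local Go sketch of that pattern follows, using only the crictl invocations that appear in the log (in the test they run over SSH and with an explicit crictl path).

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // List container IDs matching a name filter, as in the log's cri.go steps.
    func containerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    // Fetch the last n lines of a container's logs, as in the logs.go steps.
    func tailLogs(id string, n int) (string, error) {
    	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	ids, err := containerIDs("kube-apiserver")
    	if err != nil {
    		fmt.Println("listing containers:", err)
    		return
    	}
    	for _, id := range ids {
    		logs, err := tailLogs(id, 400)
    		if err != nil {
    			fmt.Println("logs for", id, ":", err)
    			continue
    		}
    		fmt.Println(logs)
    	}
    }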
	I1001 20:26:39.203217   64676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:26:39.220220   64676 api_server.go:72] duration metric: took 4m17.274155342s to wait for apiserver process to appear ...
	I1001 20:26:39.220253   64676 api_server.go:88] waiting for apiserver healthz status ...
	I1001 20:26:39.220301   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:26:39.220372   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:26:39.261710   64676 cri.go:89] found id: "a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:39.261739   64676 cri.go:89] found id: ""
	I1001 20:26:39.261749   64676 logs.go:276] 1 containers: [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd]
	I1001 20:26:39.261804   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.265994   64676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:26:39.266057   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:26:39.298615   64676 cri.go:89] found id: "586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:39.298642   64676 cri.go:89] found id: ""
	I1001 20:26:39.298650   64676 logs.go:276] 1 containers: [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f]
	I1001 20:26:39.298694   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.302584   64676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:26:39.302647   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:26:39.338062   64676 cri.go:89] found id: "4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:39.338091   64676 cri.go:89] found id: ""
	I1001 20:26:39.338102   64676 logs.go:276] 1 containers: [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6]
	I1001 20:26:39.338157   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.342553   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:26:39.342613   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:26:39.379787   64676 cri.go:89] found id: "89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:39.379818   64676 cri.go:89] found id: ""
	I1001 20:26:39.379828   64676 logs.go:276] 1 containers: [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d]
	I1001 20:26:39.379885   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.384397   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:26:39.384454   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:26:39.419175   64676 cri.go:89] found id: "fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:39.419204   64676 cri.go:89] found id: ""
	I1001 20:26:39.419215   64676 logs.go:276] 1 containers: [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8]
	I1001 20:26:39.419275   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.423113   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:26:39.423184   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:26:39.455948   64676 cri.go:89] found id: "69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:39.455974   64676 cri.go:89] found id: ""
	I1001 20:26:39.455984   64676 logs.go:276] 1 containers: [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf]
	I1001 20:26:39.456040   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.459912   64676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:26:39.459978   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:26:39.504152   64676 cri.go:89] found id: ""
	I1001 20:26:39.504179   64676 logs.go:276] 0 containers: []
	W1001 20:26:39.504187   64676 logs.go:278] No container was found matching "kindnet"
	I1001 20:26:39.504192   64676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1001 20:26:39.504241   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1001 20:26:39.538918   64676 cri.go:89] found id: "a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
	I1001 20:26:39.538940   64676 cri.go:89] found id: "652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:39.538947   64676 cri.go:89] found id: ""
	I1001 20:26:39.538957   64676 logs.go:276] 2 containers: [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b]
	I1001 20:26:39.539013   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.542832   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.546365   64676 logs.go:123] Gathering logs for storage-provisioner [652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b] ...
	I1001 20:26:39.546395   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:39.589286   64676 logs.go:123] Gathering logs for kubelet ...
	I1001 20:26:39.589320   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:26:39.657412   64676 logs.go:123] Gathering logs for dmesg ...
	I1001 20:26:39.657447   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:26:39.671553   64676 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:26:39.671581   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 20:26:39.786194   64676 logs.go:123] Gathering logs for kube-apiserver [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd] ...
	I1001 20:26:39.786226   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:39.829798   64676 logs.go:123] Gathering logs for coredns [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6] ...
	I1001 20:26:39.829831   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:39.865854   64676 logs.go:123] Gathering logs for kube-controller-manager [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf] ...
	I1001 20:26:39.865890   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:39.920702   64676 logs.go:123] Gathering logs for storage-provisioner [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa] ...
	I1001 20:26:39.920735   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
	I1001 20:26:39.959343   64676 logs.go:123] Gathering logs for etcd [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f] ...
	I1001 20:26:39.959375   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:40.001320   64676 logs.go:123] Gathering logs for kube-scheduler [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d] ...
	I1001 20:26:40.001354   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:40.037182   64676 logs.go:123] Gathering logs for kube-proxy [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8] ...
	I1001 20:26:40.037214   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:40.070072   64676 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:26:40.070098   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:26:40.492733   64676 logs.go:123] Gathering logs for container status ...
	I1001 20:26:40.492770   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
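The "Gathering logs" lines above shell out to crictl on the guest (for example `sudo /usr/bin/crictl logs --tail 400 <container-id>`), with a `docker ps -a` fallback for the container-status dump. Below is a minimal local sketch of that pattern, assuming crictl is on PATH and using os/exec directly rather than minikube's ssh_runner:

```go
package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs mirrors the pattern in the log above: run
// `crictl logs --tail N <id>` through a shell so sudo works.
// This is a sketch; minikube runs the same command over SSH instead.
func tailContainerLogs(containerID string, lines int) (string, error) {
	cmd := fmt.Sprintf("sudo crictl logs --tail %d %s", lines, containerID)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	// Hypothetical (truncated) container ID; the real IDs appear in the log above.
	out, err := tailContainerLogs("a64415a2dee8", 400)
	if err != nil {
		fmt.Println("crictl failed:", err)
	}
	fmt.Print(out)
}
```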
	I1001 20:26:43.042801   64676 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I1001 20:26:43.048223   64676 api_server.go:279] https://192.168.61.93:8443/healthz returned 200:
	ok
	I1001 20:26:43.049199   64676 api_server.go:141] control plane version: v1.31.1
	I1001 20:26:43.049229   64676 api_server.go:131] duration metric: took 3.828968104s to wait for apiserver health ...
	I1001 20:26:43.049239   64676 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 20:26:43.049267   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:26:43.049331   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:26:43.087098   64676 cri.go:89] found id: "a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:43.087132   64676 cri.go:89] found id: ""
	I1001 20:26:43.087144   64676 logs.go:276] 1 containers: [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd]
	I1001 20:26:43.087206   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.091606   64676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:26:43.091665   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:26:43.127154   64676 cri.go:89] found id: "586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:43.127177   64676 cri.go:89] found id: ""
	I1001 20:26:43.127184   64676 logs.go:276] 1 containers: [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f]
	I1001 20:26:43.127227   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.131246   64676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:26:43.131320   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:26:43.165473   64676 cri.go:89] found id: "4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:43.165503   64676 cri.go:89] found id: ""
	I1001 20:26:43.165514   64676 logs.go:276] 1 containers: [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6]
	I1001 20:26:43.165577   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.169908   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:26:43.169982   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:26:43.210196   64676 cri.go:89] found id: "89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:43.210225   64676 cri.go:89] found id: ""
	I1001 20:26:43.210235   64676 logs.go:276] 1 containers: [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d]
	I1001 20:26:43.210302   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.214253   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:26:43.214317   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:26:43.249533   64676 cri.go:89] found id: "fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:43.249555   64676 cri.go:89] found id: ""
	I1001 20:26:43.249563   64676 logs.go:276] 1 containers: [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8]
	I1001 20:26:43.249625   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.253555   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:26:43.253633   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:26:43.294711   64676 cri.go:89] found id: "69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:43.294734   64676 cri.go:89] found id: ""
	I1001 20:26:43.294742   64676 logs.go:276] 1 containers: [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf]
	I1001 20:26:43.294787   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.298960   64676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:26:43.299037   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:26:43.339542   64676 cri.go:89] found id: ""
	I1001 20:26:43.339572   64676 logs.go:276] 0 containers: []
	W1001 20:26:43.339582   64676 logs.go:278] No container was found matching "kindnet"
	I1001 20:26:43.339588   64676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1001 20:26:43.339667   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1001 20:26:43.382206   64676 cri.go:89] found id: "a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
	I1001 20:26:43.382230   64676 cri.go:89] found id: "652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:43.382234   64676 cri.go:89] found id: ""
	I1001 20:26:43.382241   64676 logs.go:276] 2 containers: [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b]
	I1001 20:26:43.382289   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.386473   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.390146   64676 logs.go:123] Gathering logs for kubelet ...
	I1001 20:26:43.390172   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:26:43.457659   64676 logs.go:123] Gathering logs for dmesg ...
	I1001 20:26:43.457699   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:26:43.471078   64676 logs.go:123] Gathering logs for kube-apiserver [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd] ...
	I1001 20:26:43.471109   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:43.518058   64676 logs.go:123] Gathering logs for etcd [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f] ...
	I1001 20:26:43.518093   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:43.559757   64676 logs.go:123] Gathering logs for kube-proxy [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8] ...
	I1001 20:26:43.559788   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:43.595485   64676 logs.go:123] Gathering logs for storage-provisioner [652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b] ...
	I1001 20:26:43.595513   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:43.628167   64676 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:26:43.628195   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 20:26:43.741206   64676 logs.go:123] Gathering logs for coredns [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6] ...
	I1001 20:26:43.741234   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:43.777220   64676 logs.go:123] Gathering logs for kube-scheduler [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d] ...
	I1001 20:26:43.777248   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:43.817507   64676 logs.go:123] Gathering logs for kube-controller-manager [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf] ...
	I1001 20:26:43.817536   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:43.880127   64676 logs.go:123] Gathering logs for storage-provisioner [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa] ...
	I1001 20:26:43.880161   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
	I1001 20:26:43.915172   64676 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:26:43.915199   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:26:44.289237   64676 logs.go:123] Gathering logs for container status ...
	I1001 20:26:44.289277   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:26:46.835363   64676 system_pods.go:59] 8 kube-system pods found
	I1001 20:26:46.835393   64676 system_pods.go:61] "coredns-7c65d6cfc9-g8jf8" [7fbddef1-a564-4ee8-ab53-ae838d0fd984] Running
	I1001 20:26:46.835398   64676 system_pods.go:61] "etcd-no-preload-262337" [086d7949-d20d-49d8-871d-a464de60e4cb] Running
	I1001 20:26:46.835402   64676 system_pods.go:61] "kube-apiserver-no-preload-262337" [d8473136-4e07-43e2-bd20-65232e2d5102] Running
	I1001 20:26:46.835405   64676 system_pods.go:61] "kube-controller-manager-no-preload-262337" [63c7d071-20cd-48c5-b410-b78e339b0731] Running
	I1001 20:26:46.835408   64676 system_pods.go:61] "kube-proxy-7rrkn" [e25a055c-0203-4fe7-8801-560b9cdb27bb] Running
	I1001 20:26:46.835412   64676 system_pods.go:61] "kube-scheduler-no-preload-262337" [3b962e64-eea6-4c24-a230-32c40106a4dd] Running
	I1001 20:26:46.835418   64676 system_pods.go:61] "metrics-server-6867b74b74-2rpwt" [235515ab-28fc-437b-983a-243f7a8fb183] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:26:46.835422   64676 system_pods.go:61] "storage-provisioner" [8832193a-39b4-49b9-b943-3241bb27fb8d] Running
	I1001 20:26:46.835431   64676 system_pods.go:74] duration metric: took 3.786183909s to wait for pod list to return data ...
	I1001 20:26:46.835441   64676 default_sa.go:34] waiting for default service account to be created ...
	I1001 20:26:46.838345   64676 default_sa.go:45] found service account: "default"
	I1001 20:26:46.838367   64676 default_sa.go:55] duration metric: took 2.918089ms for default service account to be created ...
	I1001 20:26:46.838375   64676 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 20:26:46.844822   64676 system_pods.go:86] 8 kube-system pods found
	I1001 20:26:46.844850   64676 system_pods.go:89] "coredns-7c65d6cfc9-g8jf8" [7fbddef1-a564-4ee8-ab53-ae838d0fd984] Running
	I1001 20:26:46.844856   64676 system_pods.go:89] "etcd-no-preload-262337" [086d7949-d20d-49d8-871d-a464de60e4cb] Running
	I1001 20:26:46.844860   64676 system_pods.go:89] "kube-apiserver-no-preload-262337" [d8473136-4e07-43e2-bd20-65232e2d5102] Running
	I1001 20:26:46.844863   64676 system_pods.go:89] "kube-controller-manager-no-preload-262337" [63c7d071-20cd-48c5-b410-b78e339b0731] Running
	I1001 20:26:46.844867   64676 system_pods.go:89] "kube-proxy-7rrkn" [e25a055c-0203-4fe7-8801-560b9cdb27bb] Running
	I1001 20:26:46.844870   64676 system_pods.go:89] "kube-scheduler-no-preload-262337" [3b962e64-eea6-4c24-a230-32c40106a4dd] Running
	I1001 20:26:46.844876   64676 system_pods.go:89] "metrics-server-6867b74b74-2rpwt" [235515ab-28fc-437b-983a-243f7a8fb183] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:26:46.844881   64676 system_pods.go:89] "storage-provisioner" [8832193a-39b4-49b9-b943-3241bb27fb8d] Running
	I1001 20:26:46.844889   64676 system_pods.go:126] duration metric: took 6.508902ms to wait for k8s-apps to be running ...
	I1001 20:26:46.844895   64676 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 20:26:46.844934   64676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:26:46.861543   64676 system_svc.go:56] duration metric: took 16.63712ms WaitForService to wait for kubelet
	I1001 20:26:46.861586   64676 kubeadm.go:582] duration metric: took 4m24.915538002s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:26:46.861614   64676 node_conditions.go:102] verifying NodePressure condition ...
	I1001 20:26:46.864599   64676 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 20:26:46.864632   64676 node_conditions.go:123] node cpu capacity is 2
	I1001 20:26:46.864644   64676 node_conditions.go:105] duration metric: took 3.023838ms to run NodePressure ...
	I1001 20:26:46.864657   64676 start.go:241] waiting for startup goroutines ...
	I1001 20:26:46.864667   64676 start.go:246] waiting for cluster config update ...
	I1001 20:26:46.864682   64676 start.go:255] writing updated cluster config ...
	I1001 20:26:46.864960   64676 ssh_runner.go:195] Run: rm -f paused
	I1001 20:26:46.924982   64676 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 20:26:46.926817   64676 out.go:177] * Done! kubectl is now configured to use "no-preload-262337" cluster and "default" namespace by default
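Before listing kube-system pods, the run above repeatedly checks `https://192.168.61.93:8443/healthz` until it returns 200 with body "ok". A minimal sketch of that readiness loop follows; it assumes a plain http.Client and skips TLS verification only to stay short, whereas the real check authenticates against the cluster's certificates:

```go
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers
// 200 "ok" or the context expires.
func waitForHealthz(ctx context.Context, url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver never became healthy: %w", ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForHealthz(ctx, "https://192.168.61.93:8443/healthz"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("control plane healthy")
}
```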
	I1001 20:26:45.880599   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:48.948631   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:55.028660   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:58.100570   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:04.180661   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:07.252656   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:13.332644   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:16.404640   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:22.484714   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:25.556606   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:31.636609   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:34.712725   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:40.788632   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
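The repeated `Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host` lines come from libmachine probing the guest's SSH port while the VM is still unreachable. A generic sketch of such a reachability probe, using net.DialTimeout (the driver itself goes through libmachine's SSH client rather than this helper):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// sshReachable reports whether a TCP connection to host:22 can be opened
// within the timeout; "no route to host" surfaces here as a dial error.
func sshReachable(host string, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "22"), timeout)
	if err != nil {
		fmt.Println("dial failed:", err)
		return false
	}
	conn.Close()
	return true
}

func main() {
	fmt.Println(sshReachable("192.168.50.4", 3*time.Second))
}
```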
	I1001 20:27:43.940129   65592 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1001 20:27:43.940232   65592 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1001 20:27:43.942002   65592 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1001 20:27:43.942068   65592 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:27:43.942170   65592 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:27:43.942281   65592 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:27:43.942421   65592 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1001 20:27:43.942518   65592 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 20:27:43.944271   65592 out.go:235]   - Generating certificates and keys ...
	I1001 20:27:43.944389   65592 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:27:43.944486   65592 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:27:43.944600   65592 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 20:27:43.944693   65592 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 20:27:43.944797   65592 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 20:27:43.944888   65592 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 20:27:43.944985   65592 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 20:27:43.945072   65592 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 20:27:43.945190   65592 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 20:27:43.945301   65592 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 20:27:43.945361   65592 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 20:27:43.945420   65592 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:27:43.945467   65592 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:27:43.945515   65592 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:27:43.945585   65592 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:27:43.945651   65592 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:27:43.945772   65592 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:27:43.945899   65592 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:27:43.945961   65592 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:27:43.946057   65592 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:27:43.860704   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:43.947517   65592 out.go:235]   - Booting up control plane ...
	I1001 20:27:43.947644   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:27:43.947767   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:27:43.947861   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:27:43.947978   65592 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:27:43.948185   65592 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1001 20:27:43.948258   65592 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1001 20:27:43.948396   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.948618   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.948695   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.948930   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.948991   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.949149   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.949232   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.949380   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.949439   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.949597   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.949616   65592 kubeadm.go:310] 
	I1001 20:27:43.949658   65592 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1001 20:27:43.949693   65592 kubeadm.go:310] 		timed out waiting for the condition
	I1001 20:27:43.949704   65592 kubeadm.go:310] 
	I1001 20:27:43.949737   65592 kubeadm.go:310] 	This error is likely caused by:
	I1001 20:27:43.949766   65592 kubeadm.go:310] 		- The kubelet is not running
	I1001 20:27:43.949863   65592 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1001 20:27:43.949871   65592 kubeadm.go:310] 
	I1001 20:27:43.949968   65592 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1001 20:27:43.950000   65592 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1001 20:27:43.950034   65592 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1001 20:27:43.950040   65592 kubeadm.go:310] 
	I1001 20:27:43.950136   65592 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1001 20:27:43.950207   65592 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1001 20:27:43.950213   65592 kubeadm.go:310] 
	I1001 20:27:43.950310   65592 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1001 20:27:43.950389   65592 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1001 20:27:43.950454   65592 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1001 20:27:43.950533   65592 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1001 20:27:43.950566   65592 kubeadm.go:310] 
	W1001 20:27:43.950665   65592 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1001 20:27:43.950707   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1001 20:27:44.404995   65592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:27:44.421130   65592 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:27:44.431204   65592 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:27:44.431228   65592 kubeadm.go:157] found existing configuration files:
	
	I1001 20:27:44.431270   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 20:27:44.440792   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:27:44.440857   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:27:44.450469   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 20:27:44.459640   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:27:44.459695   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:27:44.469335   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 20:27:44.478848   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:27:44.478904   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:27:44.489162   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 20:27:44.501070   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:27:44.501157   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 20:27:44.511970   65592 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 20:27:44.728685   65592 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
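Between the two `kubeadm init` attempts, the run checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it before retrying. A local sketch of that cleanup step, assuming direct filesystem access instead of minikube's SSH runner:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs drops any kubeconfig that does not point at the
// expected control-plane endpoint, mirroring the grep/rm sequence above.
// Missing files are simply skipped, as in the log.
func cleanStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			continue // file absent: nothing to clean
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("%s does not reference %s, removing\n", p, endpoint)
			_ = os.Remove(p)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```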
	I1001 20:27:49.940611   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:53.016657   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:59.092700   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:02.164611   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:08.244707   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:11.316686   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:17.400607   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:20.468660   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:26.548687   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:29.624608   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:35.700638   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:38.772693   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:44.852721   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:47.924690   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:54.004674   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:57.080644   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:29:03.156750   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:29:06.232700   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:29:12.308749   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:29:15.380633   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:29:18.381649   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 20:29:18.381689   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetMachineName
	I1001 20:29:18.382037   68418 buildroot.go:166] provisioning hostname "default-k8s-diff-port-878552"
	I1001 20:29:18.382063   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetMachineName
	I1001 20:29:18.382291   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:18.384714   68418 machine.go:96] duration metric: took 4m37.419094583s to provisionDockerMachine
	I1001 20:29:18.384772   68418 fix.go:56] duration metric: took 4m37.442164125s for fixHost
	I1001 20:29:18.384782   68418 start.go:83] releasing machines lock for "default-k8s-diff-port-878552", held for 4m37.442187455s
	W1001 20:29:18.384813   68418 start.go:714] error starting host: provision: host is not running
	W1001 20:29:18.384993   68418 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1001 20:29:18.385017   68418 start.go:729] Will try again in 5 seconds ...
	I1001 20:29:23.387086   68418 start.go:360] acquireMachinesLock for default-k8s-diff-port-878552: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 20:29:23.387232   68418 start.go:364] duration metric: took 101.596µs to acquireMachinesLock for "default-k8s-diff-port-878552"
	I1001 20:29:23.387273   68418 start.go:96] Skipping create...Using existing machine configuration
	I1001 20:29:23.387284   68418 fix.go:54] fixHost starting: 
	I1001 20:29:23.387645   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:29:23.387669   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:29:23.403371   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39149
	I1001 20:29:23.404008   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:29:23.404580   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:29:23.404603   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:29:23.405181   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:29:23.405410   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:29:23.405560   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetState
	I1001 20:29:23.407563   68418 fix.go:112] recreateIfNeeded on default-k8s-diff-port-878552: state=Stopped err=<nil>
	I1001 20:29:23.407589   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	W1001 20:29:23.407771   68418 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 20:29:23.409721   68418 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-878552" ...
	I1001 20:29:23.410973   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Start
	I1001 20:29:23.411207   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Ensuring networks are active...
	I1001 20:29:23.412117   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Ensuring network default is active
	I1001 20:29:23.412576   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Ensuring network mk-default-k8s-diff-port-878552 is active
	I1001 20:29:23.412956   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Getting domain xml...
	I1001 20:29:23.413589   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Creating domain...
	I1001 20:29:24.744972   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting to get IP...
	I1001 20:29:24.746001   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:24.746641   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:24.746710   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:24.746607   69521 retry.go:31] will retry after 260.966833ms: waiting for machine to come up
	I1001 20:29:25.009284   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.009825   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.009849   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:25.009778   69521 retry.go:31] will retry after 308.10041ms: waiting for machine to come up
	I1001 20:29:25.319153   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.319717   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.319752   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:25.319652   69521 retry.go:31] will retry after 342.802984ms: waiting for machine to come up
	I1001 20:29:25.664405   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.664893   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.664920   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:25.664816   69521 retry.go:31] will retry after 397.002924ms: waiting for machine to come up
	I1001 20:29:26.063628   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:26.064235   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:26.064259   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:26.064201   69521 retry.go:31] will retry after 526.648832ms: waiting for machine to come up
	I1001 20:29:26.592834   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:26.593284   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:26.593307   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:26.593226   69521 retry.go:31] will retry after 642.569388ms: waiting for machine to come up
	I1001 20:29:27.237224   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:27.237775   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:27.237808   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:27.237714   69521 retry.go:31] will retry after 963.05932ms: waiting for machine to come up
	I1001 20:29:28.202841   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:28.203333   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:28.203363   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:28.203287   69521 retry.go:31] will retry after 1.372004234s: waiting for machine to come up
	I1001 20:29:29.577175   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:29.577678   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:29.577706   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:29.577627   69521 retry.go:31] will retry after 1.693508507s: waiting for machine to come up
	I1001 20:29:31.273758   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:31.274247   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:31.274274   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:31.274201   69521 retry.go:31] will retry after 1.793304779s: waiting for machine to come up
	I1001 20:29:33.069467   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:33.069894   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:33.069915   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:33.069861   69521 retry.go:31] will retry after 2.825253867s: waiting for machine to come up
	I1001 20:29:40.678676   65592 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1001 20:29:40.678797   65592 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1001 20:29:40.680563   65592 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1001 20:29:40.680613   65592 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:29:40.680680   65592 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:29:40.680788   65592 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:29:40.680868   65592 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1001 20:29:40.681030   65592 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 20:29:40.683042   65592 out.go:235]   - Generating certificates and keys ...
	I1001 20:29:40.683149   65592 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:29:40.683245   65592 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:29:40.683353   65592 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 20:29:40.683435   65592 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 20:29:40.683545   65592 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 20:29:40.683605   65592 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 20:29:40.683665   65592 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 20:29:40.683723   65592 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 20:29:40.683793   65592 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 20:29:40.683878   65592 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 20:29:40.683956   65592 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 20:29:40.684054   65592 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:29:40.684127   65592 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:29:40.684212   65592 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:29:40.684303   65592 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:29:40.684414   65592 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:29:40.684551   65592 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:29:40.684661   65592 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:29:40.684724   65592 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:29:40.684827   65592 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:29:35.897417   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:35.897916   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:35.897949   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:35.897862   69521 retry.go:31] will retry after 3.519866937s: waiting for machine to come up
	I1001 20:29:39.419142   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:39.419528   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:39.419554   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:39.419494   69521 retry.go:31] will retry after 3.507101438s: waiting for machine to come up
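The `retry.go:31` lines above wait for the restarted VM to pick up an IP address, retrying with a randomized, growing delay. A generic sketch of that wait loop is shown below; `lookupIP` is a hypothetical stand-in for the driver's DHCP-lease lookup by MAC address:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("machine has no IP yet")

// lookupIP is a hypothetical stand-in for querying the hypervisor's DHCP
// leases for the domain's MAC address; it fails until the guest is up.
func lookupIP(mac string) (string, error) {
	return "", errNoIP
}

// waitForIP retries lookupIP with a randomized, growing delay, much like
// the "will retry after ..." lines in the log above.
func waitForIP(mac string, attempts int) (string, error) {
	delay := 250 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay)))
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2 // grow the wait between attempts
	}
	return "", fmt.Errorf("no IP for %s after %d attempts", mac, attempts)
}

func main() {
	ip, err := waitForIP("52:54:00:72:13:05", 5)
	fmt.Println(ip, err)
}
```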
	I1001 20:29:40.686427   65592 out.go:235]   - Booting up control plane ...
	I1001 20:29:40.686534   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:29:40.686621   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:29:40.686710   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:29:40.686820   65592 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:29:40.686996   65592 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1001 20:29:40.687063   65592 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1001 20:29:40.687127   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.687336   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.687443   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.687674   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.687759   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.687958   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.688047   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.688212   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.688274   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.688510   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.688519   65592 kubeadm.go:310] 
	I1001 20:29:40.688566   65592 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1001 20:29:40.688610   65592 kubeadm.go:310] 		timed out waiting for the condition
	I1001 20:29:40.688617   65592 kubeadm.go:310] 
	I1001 20:29:40.688646   65592 kubeadm.go:310] 	This error is likely caused by:
	I1001 20:29:40.688680   65592 kubeadm.go:310] 		- The kubelet is not running
	I1001 20:29:40.688770   65592 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1001 20:29:40.688778   65592 kubeadm.go:310] 
	I1001 20:29:40.688882   65592 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1001 20:29:40.688937   65592 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1001 20:29:40.688986   65592 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1001 20:29:40.688996   65592 kubeadm.go:310] 
	I1001 20:29:40.689114   65592 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1001 20:29:40.689222   65592 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1001 20:29:40.689237   65592 kubeadm.go:310] 
	I1001 20:29:40.689376   65592 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1001 20:29:40.689517   65592 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1001 20:29:40.689638   65592 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1001 20:29:40.689709   65592 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1001 20:29:40.689786   65592 kubeadm.go:310] 
	I1001 20:29:40.689796   65592 kubeadm.go:394] duration metric: took 7m56.416911577s to StartCluster
	I1001 20:29:40.689838   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:29:40.689896   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:29:40.733027   65592 cri.go:89] found id: ""
	I1001 20:29:40.733059   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.733068   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:29:40.733073   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:29:40.733120   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:29:40.767975   65592 cri.go:89] found id: ""
	I1001 20:29:40.768010   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.768021   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:29:40.768029   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:29:40.768095   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:29:40.802624   65592 cri.go:89] found id: ""
	I1001 20:29:40.802657   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.802668   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:29:40.802676   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:29:40.802748   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:29:40.838109   65592 cri.go:89] found id: ""
	I1001 20:29:40.838142   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.838151   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:29:40.838157   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:29:40.838204   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:29:40.873083   65592 cri.go:89] found id: ""
	I1001 20:29:40.873112   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.873124   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:29:40.873131   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:29:40.873192   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:29:40.907675   65592 cri.go:89] found id: ""
	I1001 20:29:40.907705   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.907714   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:29:40.907720   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:29:40.907775   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:29:40.941641   65592 cri.go:89] found id: ""
	I1001 20:29:40.941669   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.941678   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:29:40.941691   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:29:40.941748   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:29:40.978189   65592 cri.go:89] found id: ""
	I1001 20:29:40.978216   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.978227   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:29:40.978238   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:29:40.978254   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:29:41.053798   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:29:41.053823   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:29:41.053835   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:29:41.160669   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:29:41.160715   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:29:41.218152   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:29:41.218182   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:29:41.274784   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:29:41.274821   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
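Editor's note: the same diagnostics can be collected manually; `minikube logs --file=logs.txt` (referenced in the failure banner below) bundles them from the host, or they can be gathered on the guest with commands equivalent to the ones run above (profile name is a placeholder):

    minikube logs -p <profile> --file=logs.txt
    # or, inside the guest:
    sudo journalctl -u crio -n 400 --no-pager
    sudo journalctl -u kubelet -n 400 --no-pager
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400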
	W1001 20:29:41.288554   65592 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1001 20:29:41.288613   65592 out.go:270] * 
	W1001 20:29:41.288663   65592 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1001 20:29:41.288674   65592 out.go:270] * 
	W1001 20:29:41.289525   65592 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 20:29:41.292969   65592 out.go:201] 
	W1001 20:29:41.294238   65592 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1001 20:29:41.294278   65592 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1001 20:29:41.294297   65592 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1001 20:29:41.295783   65592 out.go:201] 
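Editor's note: minikube's own suggestion above is to retry with the kubelet pinned to the systemd cgroup driver. A hedged sketch of that retry, using the driver and runtime this job runs with (profile name is a placeholder):

    minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
      --extra-config=kubelet.cgroup-driver=systemd
    # if the kubelet still fails, check its unit logs inside the guest:
    sudo journalctl -xeu kubelet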
	I1001 20:29:42.929490   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:42.930036   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has current primary IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:42.930058   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Found IP for machine: 192.168.50.4
	I1001 20:29:42.930091   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Reserving static IP address...
	I1001 20:29:42.930623   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-878552", mac: "52:54:00:72:13:05", ip: "192.168.50.4"} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:42.930660   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | skip adding static IP to network mk-default-k8s-diff-port-878552 - found existing host DHCP lease matching {name: "default-k8s-diff-port-878552", mac: "52:54:00:72:13:05", ip: "192.168.50.4"}
	I1001 20:29:42.930686   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Reserved static IP address: 192.168.50.4
	I1001 20:29:42.930703   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for SSH to be available...
	I1001 20:29:42.930719   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | Getting to WaitForSSH function...
	I1001 20:29:42.933472   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:42.933911   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:42.933948   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:42.934106   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | Using SSH client type: external
	I1001 20:29:42.934134   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa (-rw-------)
	I1001 20:29:42.934168   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.4 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 20:29:42.934190   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | About to run SSH command:
	I1001 20:29:42.934210   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | exit 0
	I1001 20:29:43.064425   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | SSH cmd err, output: <nil>: 
	I1001 20:29:43.064821   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetConfigRaw
	I1001 20:29:43.065476   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetIP
	I1001 20:29:43.068442   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.068951   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.068982   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.069236   68418 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/config.json ...
	I1001 20:29:43.069476   68418 machine.go:93] provisionDockerMachine start ...
	I1001 20:29:43.069498   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:29:43.069726   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:43.072374   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.072720   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.072754   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.072974   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:43.073170   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.073358   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.073501   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:43.073685   68418 main.go:141] libmachine: Using SSH client type: native
	I1001 20:29:43.073919   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1001 20:29:43.073946   68418 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 20:29:43.188588   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1001 20:29:43.188626   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetMachineName
	I1001 20:29:43.188887   68418 buildroot.go:166] provisioning hostname "default-k8s-diff-port-878552"
	I1001 20:29:43.188948   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetMachineName
	I1001 20:29:43.189182   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:43.192158   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.192550   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.192575   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.192743   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:43.192918   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.193081   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.193193   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:43.193317   68418 main.go:141] libmachine: Using SSH client type: native
	I1001 20:29:43.193466   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1001 20:29:43.193478   68418 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-878552 && echo "default-k8s-diff-port-878552" | sudo tee /etc/hostname
	I1001 20:29:43.318342   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-878552
	
	I1001 20:29:43.318381   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:43.321205   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.321777   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.321807   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.322031   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:43.322218   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.322360   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.322515   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:43.322729   68418 main.go:141] libmachine: Using SSH client type: native
	I1001 20:29:43.322907   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1001 20:29:43.322925   68418 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-878552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-878552/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-878552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 20:29:43.440839   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 20:29:43.440884   68418 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 20:29:43.440949   68418 buildroot.go:174] setting up certificates
	I1001 20:29:43.440966   68418 provision.go:84] configureAuth start
	I1001 20:29:43.440982   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetMachineName
	I1001 20:29:43.441238   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetIP
	I1001 20:29:43.443849   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.444223   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.444257   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.444432   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:43.446569   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.447004   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.447032   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.447130   68418 provision.go:143] copyHostCerts
	I1001 20:29:43.447210   68418 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 20:29:43.447224   68418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 20:29:43.447317   68418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 20:29:43.447430   68418 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 20:29:43.447442   68418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 20:29:43.447484   68418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 20:29:43.447560   68418 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 20:29:43.447570   68418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 20:29:43.447602   68418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 20:29:43.447729   68418 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-878552 san=[127.0.0.1 192.168.50.4 default-k8s-diff-port-878552 localhost minikube]
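Editor's note: the SANs baked into the generated server certificate can be checked from the host with openssl, e.g. (a sketch; the path is the ServerCertPath from the auth options above):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'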
	I1001 20:29:43.597134   68418 provision.go:177] copyRemoteCerts
	I1001 20:29:43.597195   68418 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 20:29:43.597216   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:43.599988   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.600379   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.600414   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.600598   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:43.600799   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.600970   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:43.601115   68418 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa Username:docker}
	I1001 20:29:43.687211   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 20:29:43.714280   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1001 20:29:43.738536   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 20:29:43.764130   68418 provision.go:87] duration metric: took 323.147928ms to configureAuth
	I1001 20:29:43.764163   68418 buildroot.go:189] setting minikube options for container-runtime
	I1001 20:29:43.764353   68418 config.go:182] Loaded profile config "default-k8s-diff-port-878552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:29:43.764462   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:43.767588   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.767962   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.767991   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.768181   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:43.768339   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.768525   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.768665   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:43.768833   68418 main.go:141] libmachine: Using SSH client type: native
	I1001 20:29:43.768994   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1001 20:29:43.769013   68418 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 20:29:43.998941   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 20:29:43.998964   68418 machine.go:96] duration metric: took 929.475626ms to provisionDockerMachine
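Editor's note: to confirm that the insecure-registry option written above actually reached CRI-O, the drop-in and service state can be checked on the guest, e.g. (a sketch, not part of the test run):

    cat /etc/sysconfig/crio.minikube
    systemctl cat crio       # shows the unit plus drop-ins, including any EnvironmentFile references
    systemctl is-active crio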
	I1001 20:29:43.998976   68418 start.go:293] postStartSetup for "default-k8s-diff-port-878552" (driver="kvm2")
	I1001 20:29:43.998989   68418 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 20:29:43.999008   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:29:43.999305   68418 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 20:29:43.999332   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:44.001854   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.002381   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:44.002433   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.002555   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:44.002787   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:44.002967   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:44.003142   68418 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa Username:docker}
	I1001 20:29:44.091378   68418 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 20:29:44.096207   68418 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 20:29:44.096235   68418 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 20:29:44.096315   68418 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 20:29:44.096424   68418 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 20:29:44.096531   68418 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 20:29:44.106232   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 20:29:44.130524   68418 start.go:296] duration metric: took 131.532724ms for postStartSetup
	I1001 20:29:44.130564   68418 fix.go:56] duration metric: took 20.743280839s for fixHost
	I1001 20:29:44.130589   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:44.133873   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.134285   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:44.134309   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.134509   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:44.134719   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:44.134873   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:44.135025   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:44.135172   68418 main.go:141] libmachine: Using SSH client type: native
	I1001 20:29:44.135362   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1001 20:29:44.135376   68418 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 20:29:44.249136   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727814584.207146331
	
	I1001 20:29:44.249160   68418 fix.go:216] guest clock: 1727814584.207146331
	I1001 20:29:44.249189   68418 fix.go:229] Guest: 2024-10-01 20:29:44.207146331 +0000 UTC Remote: 2024-10-01 20:29:44.13056925 +0000 UTC m=+303.335525185 (delta=76.577081ms)
	I1001 20:29:44.249215   68418 fix.go:200] guest clock delta is within tolerance: 76.577081ms
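Editor's note: the clock check above is just guest time versus host time read at (almost) the same instant; the delta can be reproduced by hand (profile name is a placeholder):

    date -u +%s.%N                                 # host clock
    minikube ssh -p <profile> -- date -u +%s.%N    # guest clock; the difference is the skew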
	I1001 20:29:44.249220   68418 start.go:83] releasing machines lock for "default-k8s-diff-port-878552", held for 20.861972701s
	I1001 20:29:44.249238   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:29:44.249527   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetIP
	I1001 20:29:44.252984   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.253526   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:44.253569   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.253903   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:29:44.254449   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:29:44.254618   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:29:44.254680   68418 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 20:29:44.254727   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:44.254810   68418 ssh_runner.go:195] Run: cat /version.json
	I1001 20:29:44.254833   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:44.257550   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.257826   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.258077   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:44.258114   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.258363   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:44.258489   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:44.258529   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.258563   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:44.258683   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:44.258784   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:44.258832   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:44.258915   68418 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa Username:docker}
	I1001 20:29:44.258965   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:44.259113   68418 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa Username:docker}
	I1001 20:29:44.379049   68418 ssh_runner.go:195] Run: systemctl --version
	I1001 20:29:44.384985   68418 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 20:29:44.527579   68418 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 20:29:44.533267   68418 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 20:29:44.533357   68418 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 20:29:44.552308   68418 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 20:29:44.552333   68418 start.go:495] detecting cgroup driver to use...
	I1001 20:29:44.552421   68418 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 20:29:44.573762   68418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 20:29:44.588010   68418 docker.go:217] disabling cri-docker service (if available) ...
	I1001 20:29:44.588063   68418 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 20:29:44.602369   68418 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 20:29:44.618754   68418 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 20:29:44.757380   68418 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 20:29:44.941718   68418 docker.go:233] disabling docker service ...
	I1001 20:29:44.941790   68418 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 20:29:44.957306   68418 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 20:29:44.971723   68418 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 20:29:45.094124   68418 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 20:29:45.220645   68418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
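Editor's note: after docker and cri-docker are stopped and masked above, CRI-O should be the only runtime answering on the guest; a quick way to verify (a sketch):

    systemctl is-active crio docker cri-docker.service
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock info | head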
	I1001 20:29:45.236217   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 20:29:45.255752   68418 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 20:29:45.255820   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:29:45.266327   68418 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 20:29:45.266398   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:29:45.276964   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:29:45.288013   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:29:45.298669   68418 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 20:29:45.309693   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:29:45.320041   68418 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:29:45.336621   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:29:45.346862   68418 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 20:29:45.357656   68418 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 20:29:45.357717   68418 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 20:29:45.372693   68418 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 20:29:45.383796   68418 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:29:45.524957   68418 ssh_runner.go:195] Run: sudo systemctl restart crio
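Editor's note: the sed edits above leave the CRI-O drop-in with the cgroupfs cgroup manager, the 3.10 pause image, a "pod" conmon cgroup, and the unprivileged-port sysctl; a quick way to eyeball the result after the restart (expected values shown as comments, reconstructed rather than captured):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",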
	I1001 20:29:45.611630   68418 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 20:29:45.611702   68418 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 20:29:45.616520   68418 start.go:563] Will wait 60s for crictl version
	I1001 20:29:45.616587   68418 ssh_runner.go:195] Run: which crictl
	I1001 20:29:45.620321   68418 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 20:29:45.661806   68418 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 20:29:45.661890   68418 ssh_runner.go:195] Run: crio --version
	I1001 20:29:45.690843   68418 ssh_runner.go:195] Run: crio --version
	I1001 20:29:45.720183   68418 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 20:29:45.721659   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetIP
	I1001 20:29:45.724986   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:45.725349   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:45.725376   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:45.725583   68418 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1001 20:29:45.729522   68418 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 20:29:45.741877   68418 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-878552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-878552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.4 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 20:29:45.742008   68418 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 20:29:45.742051   68418 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:29:45.779002   68418 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1001 20:29:45.779081   68418 ssh_runner.go:195] Run: which lz4
	I1001 20:29:45.782751   68418 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 20:29:45.786704   68418 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 20:29:45.786733   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1001 20:29:47.072431   68418 crio.go:462] duration metric: took 1.289701438s to copy over tarball
	I1001 20:29:47.072508   68418 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 20:29:49.166576   68418 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.094040254s)
	I1001 20:29:49.166604   68418 crio.go:469] duration metric: took 2.094143226s to extract the tarball
	I1001 20:29:49.166613   68418 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1001 20:29:49.203988   68418 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:29:49.250464   68418 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 20:29:49.250490   68418 cache_images.go:84] Images are preloaded, skipping loading
	I1001 20:29:49.250499   68418 kubeadm.go:934] updating node { 192.168.50.4 8444 v1.31.1 crio true true} ...
	I1001 20:29:49.250612   68418 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-878552 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-878552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 20:29:49.250697   68418 ssh_runner.go:195] Run: crio config
	I1001 20:29:49.298003   68418 cni.go:84] Creating CNI manager for ""
	I1001 20:29:49.298024   68418 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:29:49.298032   68418 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 20:29:49.298055   68418 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.4 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-878552 NodeName:default-k8s-diff-port-878552 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 20:29:49.298183   68418 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.4
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-878552"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 20:29:49.298253   68418 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 20:29:49.308945   68418 binaries.go:44] Found k8s binaries, skipping transfer
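	(The kubeadm config rendered above is later copied to /var/tmp/minikube/kubeadm.yaml; with the kubeadm binary found under /var/lib/minikube/binaries/v1.31.1 it could also be sanity-checked offline, assuming the "kubeadm config validate" subcommand is present in this release. Illustrative only, not part of the test run:
	
	  sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	)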
	I1001 20:29:49.309011   68418 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 20:29:49.319017   68418 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1001 20:29:49.335588   68418 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 20:29:49.351598   68418 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I1001 20:29:49.369172   68418 ssh_runner.go:195] Run: grep 192.168.50.4	control-plane.minikube.internal$ /etc/hosts
	I1001 20:29:49.372755   68418 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.4	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 20:29:49.385529   68418 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:29:49.509676   68418 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:29:49.527149   68418 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552 for IP: 192.168.50.4
	I1001 20:29:49.527170   68418 certs.go:194] generating shared ca certs ...
	I1001 20:29:49.527185   68418 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:29:49.527321   68418 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 20:29:49.527368   68418 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 20:29:49.527378   68418 certs.go:256] generating profile certs ...
	I1001 20:29:49.527456   68418 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/client.key
	I1001 20:29:49.527514   68418 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/apiserver.key.7bbee9b6
	I1001 20:29:49.527555   68418 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/proxy-client.key
	I1001 20:29:49.527668   68418 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem (1338 bytes)
	W1001 20:29:49.527707   68418 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430_empty.pem, impossibly tiny 0 bytes
	I1001 20:29:49.527735   68418 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 20:29:49.527772   68418 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 20:29:49.527811   68418 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 20:29:49.527848   68418 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 20:29:49.527907   68418 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem (1708 bytes)
	I1001 20:29:49.529210   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 20:29:49.577743   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 20:29:49.617960   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 20:29:49.659543   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 20:29:49.709464   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1001 20:29:49.734308   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 20:29:49.759576   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 20:29:49.784416   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 20:29:49.809150   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 20:29:49.833580   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem --> /usr/share/ca-certificates/18430.pem (1338 bytes)
	I1001 20:29:49.857628   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /usr/share/ca-certificates/184302.pem (1708 bytes)
	I1001 20:29:49.880924   68418 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 20:29:49.897478   68418 ssh_runner.go:195] Run: openssl version
	I1001 20:29:49.903488   68418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 20:29:49.914490   68418 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:29:49.919105   68418 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:29:49.919165   68418 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:29:49.925133   68418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 20:29:49.936294   68418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18430.pem && ln -fs /usr/share/ca-certificates/18430.pem /etc/ssl/certs/18430.pem"
	I1001 20:29:49.946630   68418 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18430.pem
	I1001 20:29:49.951255   68418 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 20:29:49.951308   68418 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18430.pem
	I1001 20:29:49.957277   68418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18430.pem /etc/ssl/certs/51391683.0"
	I1001 20:29:49.971166   68418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/184302.pem && ln -fs /usr/share/ca-certificates/184302.pem /etc/ssl/certs/184302.pem"
	I1001 20:29:49.982558   68418 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/184302.pem
	I1001 20:29:49.986947   68418 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 20:29:49.987003   68418 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/184302.pem
	I1001 20:29:49.992569   68418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/184302.pem /etc/ssl/certs/3ec20f2e.0"
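	(The names of these /etc/ssl/certs/*.0 symlinks come straight from the subject hash that "openssl x509 -hash -noout" prints, e.g. b5213941 for the minikube CA above. An illustrative way to confirm this on the node:
	
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  ls -l /etc/ssl/certs/b5213941.0
	)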
	I1001 20:29:50.002922   68418 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 20:29:50.007707   68418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1001 20:29:50.013717   68418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1001 20:29:50.020166   68418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1001 20:29:50.026795   68418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1001 20:29:50.033544   68418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1001 20:29:50.039686   68418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
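	(Each of these openssl invocations exits 0 only if the certificate will still be valid 86400 seconds, i.e. 24 hours, from now, presumably so the restart path can decide whether the existing control-plane certs are safe to reuse. The same check can be run by hand against any single cert, illustrative only:
	
	  sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "valid for at least 24h"
	)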
	I1001 20:29:50.045837   68418 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-878552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-878552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.4 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:29:50.045971   68418 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 20:29:50.046025   68418 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 20:29:50.086925   68418 cri.go:89] found id: ""
	I1001 20:29:50.086999   68418 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 20:29:50.097130   68418 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1001 20:29:50.097167   68418 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1001 20:29:50.097222   68418 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1001 20:29:50.108298   68418 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1001 20:29:50.109389   68418 kubeconfig.go:125] found "default-k8s-diff-port-878552" server: "https://192.168.50.4:8444"
	I1001 20:29:50.111587   68418 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1001 20:29:50.122158   68418 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.4
	I1001 20:29:50.122199   68418 kubeadm.go:1160] stopping kube-system containers ...
	I1001 20:29:50.122213   68418 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1001 20:29:50.122281   68418 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 20:29:50.160351   68418 cri.go:89] found id: ""
	I1001 20:29:50.160434   68418 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1001 20:29:50.178857   68418 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:29:50.190857   68418 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:29:50.190879   68418 kubeadm.go:157] found existing configuration files:
	
	I1001 20:29:50.190926   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1001 20:29:50.200391   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:29:50.200449   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:29:50.210388   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1001 20:29:50.219943   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:29:50.220007   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:29:50.229576   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1001 20:29:50.239983   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:29:50.240055   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:29:50.251062   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1001 20:29:50.261349   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:29:50.261430   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 20:29:50.271284   68418 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 20:29:50.281256   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:29:50.393255   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:29:51.469349   68418 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.076029092s)
	I1001 20:29:51.469386   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:29:51.683522   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:29:51.749545   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:29:51.856549   68418 api_server.go:52] waiting for apiserver process to appear ...
	I1001 20:29:51.856662   68418 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:29:52.356980   68418 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:29:52.857568   68418 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:29:53.357123   68418 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:29:53.372308   68418 api_server.go:72] duration metric: took 1.515757915s to wait for apiserver process to appear ...
	I1001 20:29:53.372341   68418 api_server.go:88] waiting for apiserver healthz status ...
	I1001 20:29:53.372387   68418 api_server.go:253] Checking apiserver healthz at https://192.168.50.4:8444/healthz ...
	I1001 20:29:53.372877   68418 api_server.go:269] stopped: https://192.168.50.4:8444/healthz: Get "https://192.168.50.4:8444/healthz": dial tcp 192.168.50.4:8444: connect: connection refused
	I1001 20:29:53.872447   68418 api_server.go:253] Checking apiserver healthz at https://192.168.50.4:8444/healthz ...
	I1001 20:29:56.591087   68418 api_server.go:279] https://192.168.50.4:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1001 20:29:56.591111   68418 api_server.go:103] status: https://192.168.50.4:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1001 20:29:56.591122   68418 api_server.go:253] Checking apiserver healthz at https://192.168.50.4:8444/healthz ...
	I1001 20:29:56.668641   68418 api_server.go:279] https://192.168.50.4:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1001 20:29:56.668672   68418 api_server.go:103] status: https://192.168.50.4:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1001 20:29:56.872906   68418 api_server.go:253] Checking apiserver healthz at https://192.168.50.4:8444/healthz ...
	I1001 20:29:56.882393   68418 api_server.go:279] https://192.168.50.4:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 20:29:56.882433   68418 api_server.go:103] status: https://192.168.50.4:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 20:29:57.372590   68418 api_server.go:253] Checking apiserver healthz at https://192.168.50.4:8444/healthz ...
	I1001 20:29:57.377715   68418 api_server.go:279] https://192.168.50.4:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 20:29:57.377745   68418 api_server.go:103] status: https://192.168.50.4:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 20:29:57.873466   68418 api_server.go:253] Checking apiserver healthz at https://192.168.50.4:8444/healthz ...
	I1001 20:29:57.879628   68418 api_server.go:279] https://192.168.50.4:8444/healthz returned 200:
	ok
	I1001 20:29:57.889478   68418 api_server.go:141] control plane version: v1.31.1
	I1001 20:29:57.889512   68418 api_server.go:131] duration metric: took 4.517163838s to wait for apiserver health ...
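	(Once the endpoint returns 200, the same probe can be reproduced by hand through kubectl against the non-default API port, assuming the kubeconfig context for this profile is active:
	
	  kubectl --context default-k8s-diff-port-878552 get --raw '/healthz?verbose'
	)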
	I1001 20:29:57.889520   68418 cni.go:84] Creating CNI manager for ""
	I1001 20:29:57.889534   68418 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:29:57.891485   68418 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 20:29:57.892936   68418 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 20:29:57.910485   68418 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1001 20:29:57.930071   68418 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 20:29:57.940155   68418 system_pods.go:59] 8 kube-system pods found
	I1001 20:29:57.940191   68418 system_pods.go:61] "coredns-7c65d6cfc9-cmchv" [55a0612c-d596-4799-a9f9-0b6d9361ca15] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 20:29:57.940202   68418 system_pods.go:61] "etcd-default-k8s-diff-port-878552" [bcd7c228-d83d-4eec-9a64-f33dac086dcd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1001 20:29:57.940211   68418 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-878552" [23602015-b245-4e14-a076-2e9efb0f2f66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1001 20:29:57.940232   68418 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-878552" [e94298d4-75e3-4fbb-b361-6e5248273355] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1001 20:29:57.940239   68418 system_pods.go:61] "kube-proxy-sxxfj" [2bd75205-221e-498e-8a80-1e7a727fd4e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1001 20:29:57.940246   68418 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-878552" [ddcacd2c-3781-46df-83f8-e6763485a55d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1001 20:29:57.940254   68418 system_pods.go:61] "metrics-server-6867b74b74-b62f8" [26359941-b4d3-442c-ae46-4303a2f7b5e0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:29:57.940262   68418 system_pods.go:61] "storage-provisioner" [a34592b0-f9e5-465b-9d64-07cf84f0c951] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1001 20:29:57.940279   68418 system_pods.go:74] duration metric: took 10.189531ms to wait for pod list to return data ...
	I1001 20:29:57.940292   68418 node_conditions.go:102] verifying NodePressure condition ...
	I1001 20:29:57.945316   68418 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 20:29:57.945349   68418 node_conditions.go:123] node cpu capacity is 2
	I1001 20:29:57.945362   68418 node_conditions.go:105] duration metric: took 5.063896ms to run NodePressure ...
	I1001 20:29:57.945380   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:29:58.233781   68418 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1001 20:29:58.237692   68418 kubeadm.go:739] kubelet initialised
	I1001 20:29:58.237713   68418 kubeadm.go:740] duration metric: took 3.903724ms waiting for restarted kubelet to initialise ...
	I1001 20:29:58.237721   68418 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:29:58.243500   68418 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-cmchv" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:00.249577   68418 pod_ready.go:103] pod "coredns-7c65d6cfc9-cmchv" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:02.250329   68418 pod_ready.go:103] pod "coredns-7c65d6cfc9-cmchv" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:04.750635   68418 pod_ready.go:103] pod "coredns-7c65d6cfc9-cmchv" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:06.751559   68418 pod_ready.go:93] pod "coredns-7c65d6cfc9-cmchv" in "kube-system" namespace has status "Ready":"True"
	I1001 20:30:06.751583   68418 pod_ready.go:82] duration metric: took 8.508053751s for pod "coredns-7c65d6cfc9-cmchv" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:06.751594   68418 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:08.757727   68418 pod_ready.go:103] pod "etcd-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:10.260326   68418 pod_ready.go:93] pod "etcd-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:30:10.260352   68418 pod_ready.go:82] duration metric: took 3.508751351s for pod "etcd-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.260388   68418 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.267041   68418 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:30:10.267071   68418 pod_ready.go:82] duration metric: took 6.67429ms for pod "kube-apiserver-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.267083   68418 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.773135   68418 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:30:10.773156   68418 pod_ready.go:82] duration metric: took 506.065053ms for pod "kube-controller-manager-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.773166   68418 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-sxxfj" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.777890   68418 pod_ready.go:93] pod "kube-proxy-sxxfj" in "kube-system" namespace has status "Ready":"True"
	I1001 20:30:10.777910   68418 pod_ready.go:82] duration metric: took 4.738315ms for pod "kube-proxy-sxxfj" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.777918   68418 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.782610   68418 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:30:10.782634   68418 pod_ready.go:82] duration metric: took 4.708989ms for pod "kube-scheduler-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.782644   68418 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:12.789050   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:15.290635   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:17.290867   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:19.789502   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:21.789999   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:24.289487   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:26.789083   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:28.789955   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:30.790439   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:33.289188   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:35.289313   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:37.289903   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:39.788459   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:41.788633   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:43.788867   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:46.290002   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:48.789891   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:51.289334   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:53.788643   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:55.789983   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:58.288949   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:00.289478   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:02.290789   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:04.789722   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:07.289474   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:09.290183   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:11.790355   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:14.289284   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:16.289536   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:18.289606   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:20.789261   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:22.789463   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:25.290185   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:27.788643   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:29.788778   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:31.790285   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:34.288230   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:36.288784   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:38.289862   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:40.789317   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:43.289232   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:45.290400   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:47.788723   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:49.789327   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:52.289114   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:54.788895   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:56.788984   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:59.288473   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:01.789415   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:04.289328   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:06.289615   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:08.788879   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:10.790191   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:13.288885   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:15.789008   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:17.789191   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:19.789559   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:22.288958   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:24.290206   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:26.788241   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:28.789457   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:31.288929   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:33.789418   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:35.789932   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:38.288742   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:40.289667   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:42.789129   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:44.790115   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:47.289310   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:49.289558   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:51.789255   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:54.289586   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:56.788032   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:58.789012   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:01.289206   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:03.788129   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:05.788915   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:07.790124   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:10.289206   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:12.789314   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:14.789636   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:17.288443   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:19.289524   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:21.289650   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:23.789802   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:26.289735   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:28.788897   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:30.789339   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:33.289295   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:35.289664   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:37.789968   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:40.289657   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:42.789430   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:45.289320   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:47.789980   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:50.287836   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:52.289028   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:54.788936   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:56.789521   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:59.289778   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:34:01.788398   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:34:03.789045   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:34:05.789391   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:34:08.289322   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:34:10.783748   68418 pod_ready.go:82] duration metric: took 4m0.001085136s for pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace to be "Ready" ...
	E1001 20:34:10.783784   68418 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace to be "Ready" (will not retry!)
	I1001 20:34:10.783805   68418 pod_ready.go:39] duration metric: took 4m12.546072786s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:34:10.783831   68418 kubeadm.go:597] duration metric: took 4m20.686657254s to restartPrimaryControlPlane
	W1001 20:34:10.783895   68418 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1001 20:34:10.783926   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1001 20:34:36.981542   68418 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.197594945s)
	I1001 20:34:36.981628   68418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:34:37.005650   68418 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 20:34:37.017406   68418 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:34:37.031711   68418 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:34:37.031737   68418 kubeadm.go:157] found existing configuration files:
	
	I1001 20:34:37.031801   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1001 20:34:37.054028   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:34:37.054096   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:34:37.068277   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1001 20:34:37.099472   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:34:37.099558   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:34:37.109813   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1001 20:34:37.119548   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:34:37.119620   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:34:37.129522   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1001 20:34:37.138911   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:34:37.138971   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 20:34:37.149119   68418 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 20:34:37.193177   68418 kubeadm.go:310] W1001 20:34:37.161028    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 20:34:37.193935   68418 kubeadm.go:310] W1001 20:34:37.161888    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 20:34:37.305111   68418 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 20:34:45.582383   68418 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 20:34:45.582463   68418 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:34:45.582540   68418 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:34:45.582643   68418 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:34:45.582725   68418 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 20:34:45.582825   68418 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 20:34:45.584304   68418 out.go:235]   - Generating certificates and keys ...
	I1001 20:34:45.584409   68418 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:34:45.584488   68418 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:34:45.584584   68418 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 20:34:45.584666   68418 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 20:34:45.584757   68418 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 20:34:45.584833   68418 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 20:34:45.584926   68418 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 20:34:45.585014   68418 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 20:34:45.585109   68418 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 20:34:45.585227   68418 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 20:34:45.585291   68418 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 20:34:45.585364   68418 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:34:45.585438   68418 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:34:45.585527   68418 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 20:34:45.585609   68418 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:34:45.585710   68418 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:34:45.585792   68418 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:34:45.585901   68418 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:34:45.585990   68418 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:34:45.587360   68418 out.go:235]   - Booting up control plane ...
	I1001 20:34:45.587448   68418 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:34:45.587539   68418 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:34:45.587626   68418 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:34:45.587751   68418 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:34:45.587885   68418 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:34:45.587960   68418 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:34:45.588118   68418 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 20:34:45.588256   68418 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 20:34:45.588341   68418 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002411615s
	I1001 20:34:45.588453   68418 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1001 20:34:45.588531   68418 kubeadm.go:310] [api-check] The API server is healthy after 5.002438287s
	I1001 20:34:45.588653   68418 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 20:34:45.588821   68418 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 20:34:45.588925   68418 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 20:34:45.589184   68418 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-878552 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 20:34:45.589272   68418 kubeadm.go:310] [bootstrap-token] Using token: p1d60n.4sgx895mi22cjpsf
	I1001 20:34:45.590444   68418 out.go:235]   - Configuring RBAC rules ...
	I1001 20:34:45.590599   68418 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 20:34:45.590726   68418 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 20:34:45.590923   68418 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 20:34:45.591071   68418 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 20:34:45.591222   68418 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 20:34:45.591301   68418 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 20:34:45.591402   68418 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 20:34:45.591441   68418 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 20:34:45.591485   68418 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 20:34:45.591492   68418 kubeadm.go:310] 
	I1001 20:34:45.591540   68418 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 20:34:45.591548   68418 kubeadm.go:310] 
	I1001 20:34:45.591614   68418 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 20:34:45.591619   68418 kubeadm.go:310] 
	I1001 20:34:45.591644   68418 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 20:34:45.591694   68418 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 20:34:45.591750   68418 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 20:34:45.591756   68418 kubeadm.go:310] 
	I1001 20:34:45.591812   68418 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 20:34:45.591818   68418 kubeadm.go:310] 
	I1001 20:34:45.591857   68418 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 20:34:45.591865   68418 kubeadm.go:310] 
	I1001 20:34:45.591909   68418 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 20:34:45.591990   68418 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 20:34:45.592063   68418 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 20:34:45.592071   68418 kubeadm.go:310] 
	I1001 20:34:45.592195   68418 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 20:34:45.592313   68418 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 20:34:45.592322   68418 kubeadm.go:310] 
	I1001 20:34:45.592432   68418 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token p1d60n.4sgx895mi22cjpsf \
	I1001 20:34:45.592579   68418 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 \
	I1001 20:34:45.592611   68418 kubeadm.go:310] 	--control-plane 
	I1001 20:34:45.592620   68418 kubeadm.go:310] 
	I1001 20:34:45.592734   68418 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 20:34:45.592743   68418 kubeadm.go:310] 
	I1001 20:34:45.592858   68418 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token p1d60n.4sgx895mi22cjpsf \
	I1001 20:34:45.592982   68418 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 
	I1001 20:34:45.592997   68418 cni.go:84] Creating CNI manager for ""
	I1001 20:34:45.593009   68418 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:34:45.594419   68418 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 20:34:45.595548   68418 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 20:34:45.607351   68418 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1001 20:34:45.627315   68418 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 20:34:45.627399   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:45.627424   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-878552 minikube.k8s.io/updated_at=2024_10_01T20_34_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=default-k8s-diff-port-878552 minikube.k8s.io/primary=true
	I1001 20:34:45.843925   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:45.843977   68418 ops.go:34] apiserver oom_adj: -16
	I1001 20:34:46.344009   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:46.844786   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:47.344138   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:47.844582   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:48.344478   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:48.844802   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:49.344790   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:49.844113   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:49.980078   68418 kubeadm.go:1113] duration metric: took 4.352743528s to wait for elevateKubeSystemPrivileges
	I1001 20:34:49.980127   68418 kubeadm.go:394] duration metric: took 4m59.934297539s to StartCluster
	I1001 20:34:49.980151   68418 settings.go:142] acquiring lock: {Name:mkeb3fe64ed992373f048aef24eb0d675bcab60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:34:49.980237   68418 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:34:49.982156   68418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/kubeconfig: {Name:mk7ef19d546928001abf2478a3abe1d17765a591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:34:49.982450   68418 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.4 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 20:34:49.982531   68418 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 20:34:49.982651   68418 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-878552"
	I1001 20:34:49.982674   68418 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-878552"
	I1001 20:34:49.982673   68418 config.go:182] Loaded profile config "default-k8s-diff-port-878552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W1001 20:34:49.982682   68418 addons.go:243] addon storage-provisioner should already be in state true
	I1001 20:34:49.982722   68418 host.go:66] Checking if "default-k8s-diff-port-878552" exists ...
	I1001 20:34:49.982727   68418 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-878552"
	I1001 20:34:49.982743   68418 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-878552"
	I1001 20:34:49.982817   68418 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-878552"
	I1001 20:34:49.982861   68418 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-878552"
	W1001 20:34:49.982871   68418 addons.go:243] addon metrics-server should already be in state true
	I1001 20:34:49.982899   68418 host.go:66] Checking if "default-k8s-diff-port-878552" exists ...
	I1001 20:34:49.983158   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:34:49.983157   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:34:49.983202   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:34:49.983222   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:34:49.983301   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:34:49.983360   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:34:49.983825   68418 out.go:177] * Verifying Kubernetes components...
	I1001 20:34:49.985618   68418 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:34:50.000925   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33641
	I1001 20:34:50.001031   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40311
	I1001 20:34:50.001469   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:34:50.001518   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:34:50.002031   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:34:50.002046   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:34:50.002084   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:34:50.002096   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:34:50.002510   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:34:50.002698   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:34:50.003148   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:34:50.003188   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:34:50.003432   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33083
	I1001 20:34:50.003813   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:34:50.003845   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:34:50.003858   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:34:50.004438   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:34:50.004462   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:34:50.004823   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:34:50.005017   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetState
	I1001 20:34:50.009397   68418 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-878552"
	W1001 20:34:50.009420   68418 addons.go:243] addon default-storageclass should already be in state true
	I1001 20:34:50.009449   68418 host.go:66] Checking if "default-k8s-diff-port-878552" exists ...
	I1001 20:34:50.009886   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:34:50.009937   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:34:50.025234   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42543
	I1001 20:34:50.025892   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:34:50.026556   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:34:50.026583   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:34:50.027217   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:34:50.027484   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetState
	I1001 20:34:50.029351   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42743
	I1001 20:34:50.029576   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:34:50.029996   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:34:50.030498   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:34:50.030520   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:34:50.030634   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45211
	I1001 20:34:50.030843   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:34:50.031078   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetState
	I1001 20:34:50.031171   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:34:50.031283   68418 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 20:34:50.031683   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:34:50.031706   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:34:50.032061   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:34:50.032524   68418 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:34:50.032542   68418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 20:34:50.032560   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:34:50.032650   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:34:50.032683   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:34:50.033489   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:34:50.034928   68418 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1001 20:34:50.036629   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:34:50.036714   68418 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1001 20:34:50.036728   68418 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1001 20:34:50.036757   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:34:50.037000   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:34:50.037020   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:34:50.037303   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:34:50.037502   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:34:50.037697   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:34:50.037858   68418 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa Username:docker}
	I1001 20:34:50.040023   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:34:50.040406   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:34:50.040428   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:34:50.040637   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:34:50.040843   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:34:50.041031   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:34:50.041156   68418 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa Username:docker}
	I1001 20:34:50.050069   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34785
	I1001 20:34:50.050601   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:34:50.051079   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:34:50.051098   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:34:50.051460   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:34:50.051601   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetState
	I1001 20:34:50.054072   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:34:50.054308   68418 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 20:34:50.054324   68418 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 20:34:50.054344   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:34:50.057697   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:34:50.058329   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:34:50.058386   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:34:50.058519   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:34:50.058781   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:34:50.059047   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:34:50.059192   68418 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa Username:docker}
	I1001 20:34:50.228332   68418 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:34:50.245991   68418 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-878552" to be "Ready" ...
	I1001 20:34:50.255784   68418 node_ready.go:49] node "default-k8s-diff-port-878552" has status "Ready":"True"
	I1001 20:34:50.255822   68418 node_ready.go:38] duration metric: took 9.789404ms for node "default-k8s-diff-port-878552" to be "Ready" ...
	I1001 20:34:50.255836   68418 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:34:50.262258   68418 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8xth8" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:50.409170   68418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 20:34:50.412846   68418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:34:50.423375   68418 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1001 20:34:50.423404   68418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1001 20:34:50.476160   68418 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1001 20:34:50.476192   68418 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1001 20:34:50.510810   68418 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 20:34:50.510840   68418 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1001 20:34:50.570025   68418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 20:34:50.783367   68418 main.go:141] libmachine: Making call to close driver server
	I1001 20:34:50.783390   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Close
	I1001 20:34:50.783748   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | Closing plugin on server side
	I1001 20:34:50.783761   68418 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:34:50.783773   68418 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:34:50.783786   68418 main.go:141] libmachine: Making call to close driver server
	I1001 20:34:50.783794   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Close
	I1001 20:34:50.783980   68418 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:34:50.783993   68418 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:34:50.783999   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | Closing plugin on server side
	I1001 20:34:50.795782   68418 main.go:141] libmachine: Making call to close driver server
	I1001 20:34:50.795802   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Close
	I1001 20:34:50.796093   68418 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:34:50.796114   68418 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:34:51.424974   68418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.012087585s)
	I1001 20:34:51.425090   68418 main.go:141] libmachine: Making call to close driver server
	I1001 20:34:51.425107   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Close
	I1001 20:34:51.425376   68418 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:34:51.425413   68418 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:34:51.425426   68418 main.go:141] libmachine: Making call to close driver server
	I1001 20:34:51.425440   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Close
	I1001 20:34:51.425671   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | Closing plugin on server side
	I1001 20:34:51.425723   68418 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:34:51.425743   68418 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:34:51.713898   68418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.143834875s)
	I1001 20:34:51.713954   68418 main.go:141] libmachine: Making call to close driver server
	I1001 20:34:51.713969   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Close
	I1001 20:34:51.714336   68418 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:34:51.714375   68418 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:34:51.714380   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | Closing plugin on server side
	I1001 20:34:51.714385   68418 main.go:141] libmachine: Making call to close driver server
	I1001 20:34:51.714487   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Close
	I1001 20:34:51.714762   68418 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:34:51.714779   68418 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:34:51.714798   68418 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-878552"
	I1001 20:34:51.716414   68418 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1001 20:34:51.717866   68418 addons.go:510] duration metric: took 1.735348103s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1001 20:34:52.268955   68418 pod_ready.go:103] pod "coredns-7c65d6cfc9-8xth8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:34:54.769610   68418 pod_ready.go:93] pod "coredns-7c65d6cfc9-8xth8" in "kube-system" namespace has status "Ready":"True"
	I1001 20:34:54.769633   68418 pod_ready.go:82] duration metric: took 4.507339793s for pod "coredns-7c65d6cfc9-8xth8" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:54.769642   68418 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-p7wbg" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:56.775610   68418 pod_ready.go:103] pod "coredns-7c65d6cfc9-p7wbg" in "kube-system" namespace has status "Ready":"False"
	I1001 20:34:57.777422   68418 pod_ready.go:93] pod "coredns-7c65d6cfc9-p7wbg" in "kube-system" namespace has status "Ready":"True"
	I1001 20:34:57.777445   68418 pod_ready.go:82] duration metric: took 3.007796462s for pod "coredns-7c65d6cfc9-p7wbg" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.777455   68418 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.783103   68418 pod_ready.go:93] pod "etcd-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:34:57.783124   68418 pod_ready.go:82] duration metric: took 5.664052ms for pod "etcd-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.783135   68418 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.788028   68418 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:34:57.788052   68418 pod_ready.go:82] duration metric: took 4.910566ms for pod "kube-apiserver-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.788064   68418 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.792321   68418 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:34:57.792348   68418 pod_ready.go:82] duration metric: took 4.274793ms for pod "kube-controller-manager-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.792379   68418 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-272ln" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.797759   68418 pod_ready.go:93] pod "kube-proxy-272ln" in "kube-system" namespace has status "Ready":"True"
	I1001 20:34:57.797782   68418 pod_ready.go:82] duration metric: took 5.395909ms for pod "kube-proxy-272ln" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.797792   68418 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:58.173750   68418 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:34:58.173783   68418 pod_ready.go:82] duration metric: took 375.98387ms for pod "kube-scheduler-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:58.173793   68418 pod_ready.go:39] duration metric: took 7.917945016s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:34:58.173812   68418 api_server.go:52] waiting for apiserver process to appear ...
	I1001 20:34:58.173878   68418 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:34:58.188649   68418 api_server.go:72] duration metric: took 8.206165908s to wait for apiserver process to appear ...
	I1001 20:34:58.188676   68418 api_server.go:88] waiting for apiserver healthz status ...
	I1001 20:34:58.188697   68418 api_server.go:253] Checking apiserver healthz at https://192.168.50.4:8444/healthz ...
	I1001 20:34:58.193752   68418 api_server.go:279] https://192.168.50.4:8444/healthz returned 200:
	ok
	I1001 20:34:58.194629   68418 api_server.go:141] control plane version: v1.31.1
	I1001 20:34:58.194646   68418 api_server.go:131] duration metric: took 5.963942ms to wait for apiserver health ...
	I1001 20:34:58.194653   68418 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 20:34:58.378081   68418 system_pods.go:59] 9 kube-system pods found
	I1001 20:34:58.378110   68418 system_pods.go:61] "coredns-7c65d6cfc9-8xth8" [4a6d614d-f16c-46fb-add5-610ac5895e1c] Running
	I1001 20:34:58.378115   68418 system_pods.go:61] "coredns-7c65d6cfc9-p7wbg" [13fab587-7dc4-41fc-a74c-47372725886d] Running
	I1001 20:34:58.378121   68418 system_pods.go:61] "etcd-default-k8s-diff-port-878552" [56a25509-d233-470d-888a-cf87475bf51b] Running
	I1001 20:34:58.378124   68418 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-878552" [d74bbc5a-6944-4e7b-a175-59b8ce58b359] Running
	I1001 20:34:58.378128   68418 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-878552" [5f2b8294-3146-4996-8a92-69ae08803d55] Running
	I1001 20:34:58.378131   68418 system_pods.go:61] "kube-proxy-272ln" [9f2e367f-34c7-4117-bd8e-62b5aa58c7b5] Running
	I1001 20:34:58.378134   68418 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-878552" [91e886e5-8452-4fe2-8be8-7705eeed5073] Running
	I1001 20:34:58.378140   68418 system_pods.go:61] "metrics-server-6867b74b74-75m4s" [c8eb4eb3-ea7f-4a88-8c1f-e3331f44ae53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:34:58.378143   68418 system_pods.go:61] "storage-provisioner" [bfc9ed28-f04b-4e57-b8c0-f41849e1fc25] Running
	I1001 20:34:58.378151   68418 system_pods.go:74] duration metric: took 183.491966ms to wait for pod list to return data ...
	I1001 20:34:58.378157   68418 default_sa.go:34] waiting for default service account to be created ...
	I1001 20:34:58.574257   68418 default_sa.go:45] found service account: "default"
	I1001 20:34:58.574282   68418 default_sa.go:55] duration metric: took 196.119399ms for default service account to be created ...
	I1001 20:34:58.574290   68418 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 20:34:58.776341   68418 system_pods.go:86] 9 kube-system pods found
	I1001 20:34:58.776395   68418 system_pods.go:89] "coredns-7c65d6cfc9-8xth8" [4a6d614d-f16c-46fb-add5-610ac5895e1c] Running
	I1001 20:34:58.776406   68418 system_pods.go:89] "coredns-7c65d6cfc9-p7wbg" [13fab587-7dc4-41fc-a74c-47372725886d] Running
	I1001 20:34:58.776420   68418 system_pods.go:89] "etcd-default-k8s-diff-port-878552" [56a25509-d233-470d-888a-cf87475bf51b] Running
	I1001 20:34:58.776428   68418 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-878552" [d74bbc5a-6944-4e7b-a175-59b8ce58b359] Running
	I1001 20:34:58.776438   68418 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-878552" [5f2b8294-3146-4996-8a92-69ae08803d55] Running
	I1001 20:34:58.776443   68418 system_pods.go:89] "kube-proxy-272ln" [9f2e367f-34c7-4117-bd8e-62b5aa58c7b5] Running
	I1001 20:34:58.776449   68418 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-878552" [91e886e5-8452-4fe2-8be8-7705eeed5073] Running
	I1001 20:34:58.776456   68418 system_pods.go:89] "metrics-server-6867b74b74-75m4s" [c8eb4eb3-ea7f-4a88-8c1f-e3331f44ae53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:34:58.776463   68418 system_pods.go:89] "storage-provisioner" [bfc9ed28-f04b-4e57-b8c0-f41849e1fc25] Running
	I1001 20:34:58.776471   68418 system_pods.go:126] duration metric: took 202.174994ms to wait for k8s-apps to be running ...
	I1001 20:34:58.776481   68418 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 20:34:58.776526   68418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:34:58.791729   68418 system_svc.go:56] duration metric: took 15.241394ms WaitForService to wait for kubelet
	I1001 20:34:58.791758   68418 kubeadm.go:582] duration metric: took 8.809278003s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:34:58.791774   68418 node_conditions.go:102] verifying NodePressure condition ...
	I1001 20:34:58.976076   68418 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 20:34:58.976102   68418 node_conditions.go:123] node cpu capacity is 2
	I1001 20:34:58.976115   68418 node_conditions.go:105] duration metric: took 184.336121ms to run NodePressure ...
	I1001 20:34:58.976127   68418 start.go:241] waiting for startup goroutines ...
	I1001 20:34:58.976136   68418 start.go:246] waiting for cluster config update ...
	I1001 20:34:58.976149   68418 start.go:255] writing updated cluster config ...
	I1001 20:34:58.976450   68418 ssh_runner.go:195] Run: rm -f paused
	I1001 20:34:59.026367   68418 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 20:34:59.029055   68418 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-878552" cluster and "default" namespace by default
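Editor's note: the repeated pod_ready.go lines earlier in this log (the four-minute "Ready":"False" loop for metrics-server-6867b74b74-b62f8, polled roughly every 2.5s) come from minikube's own readiness wait. The sketch below is not minikube's pod_ready.go; it is a minimal, generic client-go poll written to illustrate the same pattern. The kubeconfig path, namespace, pod name, interval, and timeout are placeholders taken from or approximated against the log above.

// Minimal sketch, assuming k8s.io/client-go and k8s.io/apimachinery are on the module path.
// It polls one pod's PodReady condition until it is True or a timeout expires,
// mirroring the cadence shown in the pod_ready.go lines above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumes a reachable kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const (
		namespace = "kube-system"
		podName   = "metrics-server-6867b74b74-b62f8" // placeholder copied from the log
	)

	// Poll every 2.5s for up to 4m, roughly the interval and timeout seen above.
	err = wait.PollUntilContextTimeout(context.Background(), 2500*time.Millisecond, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods(namespace).Get(ctx, podName, metav1.GetOptions{})
			if err != nil {
				// Treat transient API errors as "not ready yet" and keep polling.
				return false, nil
			}
			return isPodReady(pod), nil
		})
	if err != nil {
		fmt.Printf("pod %q never became Ready: %v\n", podName, err)
		return
	}
	fmt.Printf("pod %q is Ready\n", podName)
}

Returning (false, nil) on a Get error keeps the poll alive through brief API-server blips, which matches the behavior implied by the log: the wait only gives up when the overall 4m0s timeout expires, producing the "will not retry!" error recorded above.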
	
	
	==> CRI-O <==
	Oct 01 20:38:44 old-k8s-version-359369 crio[632]: time="2024-10-01 20:38:44.047870708Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815124047836102,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=36bc6022-be38-40f8-8146-1bdd7d75bc5f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:38:44 old-k8s-version-359369 crio[632]: time="2024-10-01 20:38:44.048844881Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b67cb22f-cb19-4a56-8ef5-02c27bbfa3d4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:38:44 old-k8s-version-359369 crio[632]: time="2024-10-01 20:38:44.048927838Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b67cb22f-cb19-4a56-8ef5-02c27bbfa3d4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:38:44 old-k8s-version-359369 crio[632]: time="2024-10-01 20:38:44.048987654Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b67cb22f-cb19-4a56-8ef5-02c27bbfa3d4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:38:44 old-k8s-version-359369 crio[632]: time="2024-10-01 20:38:44.084226978Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=928002cc-e546-442a-9556-49fe0b0c6fc8 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:38:44 old-k8s-version-359369 crio[632]: time="2024-10-01 20:38:44.084323667Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=928002cc-e546-442a-9556-49fe0b0c6fc8 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:38:44 old-k8s-version-359369 crio[632]: time="2024-10-01 20:38:44.085717270Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dc3ddd97-fb57-476d-9fa3-5a99c2082844 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:38:44 old-k8s-version-359369 crio[632]: time="2024-10-01 20:38:44.086310491Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815124086278776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc3ddd97-fb57-476d-9fa3-5a99c2082844 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:38:44 old-k8s-version-359369 crio[632]: time="2024-10-01 20:38:44.086967387Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=56e7ef93-3756-47bc-9830-911c15c3492f name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:38:44 old-k8s-version-359369 crio[632]: time="2024-10-01 20:38:44.087038091Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=56e7ef93-3756-47bc-9830-911c15c3492f name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:38:44 old-k8s-version-359369 crio[632]: time="2024-10-01 20:38:44.087101779Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=56e7ef93-3756-47bc-9830-911c15c3492f name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:38:44 old-k8s-version-359369 crio[632]: time="2024-10-01 20:38:44.117614470Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=56d88f5b-408e-4d27-95cc-0d5917defede name=/runtime.v1.RuntimeService/Version
	Oct 01 20:38:44 old-k8s-version-359369 crio[632]: time="2024-10-01 20:38:44.117706219Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=56d88f5b-408e-4d27-95cc-0d5917defede name=/runtime.v1.RuntimeService/Version
	Oct 01 20:38:44 old-k8s-version-359369 crio[632]: time="2024-10-01 20:38:44.118639025Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=517b8feb-1e94-4c87-b140-efc804abd827 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:38:44 old-k8s-version-359369 crio[632]: time="2024-10-01 20:38:44.119044487Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815124119023516,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=517b8feb-1e94-4c87-b140-efc804abd827 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:38:44 old-k8s-version-359369 crio[632]: time="2024-10-01 20:38:44.119747135Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bb90a92d-953f-4c97-a38a-414020671853 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:38:44 old-k8s-version-359369 crio[632]: time="2024-10-01 20:38:44.119812583Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bb90a92d-953f-4c97-a38a-414020671853 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:38:44 old-k8s-version-359369 crio[632]: time="2024-10-01 20:38:44.119852144Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bb90a92d-953f-4c97-a38a-414020671853 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:38:44 old-k8s-version-359369 crio[632]: time="2024-10-01 20:38:44.154429262Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fdb8874e-4027-4be1-b643-c4866918ca57 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:38:44 old-k8s-version-359369 crio[632]: time="2024-10-01 20:38:44.154503889Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fdb8874e-4027-4be1-b643-c4866918ca57 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:38:44 old-k8s-version-359369 crio[632]: time="2024-10-01 20:38:44.155611397Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3f9b3546-cc18-418c-8bb5-317eb484465d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:38:44 old-k8s-version-359369 crio[632]: time="2024-10-01 20:38:44.156015494Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815124155988844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f9b3546-cc18-418c-8bb5-317eb484465d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:38:44 old-k8s-version-359369 crio[632]: time="2024-10-01 20:38:44.156539761Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=461f7424-7535-4824-9896-75bf36a2199d name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:38:44 old-k8s-version-359369 crio[632]: time="2024-10-01 20:38:44.156620017Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=461f7424-7535-4824-9896-75bf36a2199d name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:38:44 old-k8s-version-359369 crio[632]: time="2024-10-01 20:38:44.156652353Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=461f7424-7535-4824-9896-75bf36a2199d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 1 20:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.061451] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043514] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.028959] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.047745] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.355137] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.538724] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.065709] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077031] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.174087] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.145035] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.248393] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +6.785134] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.069182] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.078495] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[ +11.012728] kauditd_printk_skb: 46 callbacks suppressed
	[Oct 1 20:25] systemd-fstab-generator[5075]: Ignoring "noauto" option for root device
	[Oct 1 20:27] systemd-fstab-generator[5356]: Ignoring "noauto" option for root device
	[  +0.061063] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:38:44 up 17 min,  0 users,  load average: 0.16, 0.07, 0.01
	Linux old-k8s-version-359369 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 01 20:38:41 old-k8s-version-359369 kubelet[6530]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001020c0, 0xc00084d7a0)
	Oct 01 20:38:41 old-k8s-version-359369 kubelet[6530]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Oct 01 20:38:41 old-k8s-version-359369 kubelet[6530]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Oct 01 20:38:41 old-k8s-version-359369 kubelet[6530]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Oct 01 20:38:41 old-k8s-version-359369 kubelet[6530]: goroutine 145 [select]:
	Oct 01 20:38:41 old-k8s-version-359369 kubelet[6530]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000753ef0, 0x4f0ac20, 0xc000bf5770, 0x1, 0xc0001020c0)
	Oct 01 20:38:41 old-k8s-version-359369 kubelet[6530]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Oct 01 20:38:41 old-k8s-version-359369 kubelet[6530]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000d8460, 0xc0001020c0)
	Oct 01 20:38:41 old-k8s-version-359369 kubelet[6530]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Oct 01 20:38:41 old-k8s-version-359369 kubelet[6530]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Oct 01 20:38:41 old-k8s-version-359369 kubelet[6530]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Oct 01 20:38:41 old-k8s-version-359369 kubelet[6530]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000c8ae50, 0xc00036d2c0)
	Oct 01 20:38:41 old-k8s-version-359369 kubelet[6530]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Oct 01 20:38:41 old-k8s-version-359369 kubelet[6530]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Oct 01 20:38:41 old-k8s-version-359369 kubelet[6530]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Oct 01 20:38:41 old-k8s-version-359369 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 01 20:38:41 old-k8s-version-359369 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 01 20:38:41 old-k8s-version-359369 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Oct 01 20:38:41 old-k8s-version-359369 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 01 20:38:41 old-k8s-version-359369 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 01 20:38:41 old-k8s-version-359369 kubelet[6539]: I1001 20:38:41.952557    6539 server.go:416] Version: v1.20.0
	Oct 01 20:38:41 old-k8s-version-359369 kubelet[6539]: I1001 20:38:41.952895    6539 server.go:837] Client rotation is on, will bootstrap in background
	Oct 01 20:38:41 old-k8s-version-359369 kubelet[6539]: I1001 20:38:41.954772    6539 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 01 20:38:41 old-k8s-version-359369 kubelet[6539]: W1001 20:38:41.955788    6539 manager.go:159] Cannot detect current cgroup on cgroup v2
	Oct 01 20:38:41 old-k8s-version-359369 kubelet[6539]: I1001 20:38:41.955847    6539 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-359369 -n old-k8s-version-359369
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-359369 -n old-k8s-version-359369: exit status 2 (220.02176ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-359369" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.81s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-878552 -n default-k8s-diff-port-878552
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-01 20:43:59.614051979 +0000 UTC m=+6590.562855377
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-878552 -n default-k8s-diff-port-878552
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-878552 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-878552 logs -n 25: (2.264186388s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-983557 sudo                               | kindnet-983557            | jenkins | v1.34.0 | 01 Oct 24 20:43 UTC | 01 Oct 24 20:43 UTC |
	|         | systemctl status kubelet --all                       |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-983557 sudo                               | kindnet-983557            | jenkins | v1.34.0 | 01 Oct 24 20:43 UTC | 01 Oct 24 20:43 UTC |
	|         | systemctl cat kubelet                                |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-983557 sudo                               | kindnet-983557            | jenkins | v1.34.0 | 01 Oct 24 20:43 UTC | 01 Oct 24 20:43 UTC |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-983557 sudo cat                           | kindnet-983557            | jenkins | v1.34.0 | 01 Oct 24 20:43 UTC | 01 Oct 24 20:43 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-983557 sudo cat                           | kindnet-983557            | jenkins | v1.34.0 | 01 Oct 24 20:43 UTC | 01 Oct 24 20:43 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-983557 sudo                               | kindnet-983557            | jenkins | v1.34.0 | 01 Oct 24 20:43 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-983557 sudo                               | kindnet-983557            | jenkins | v1.34.0 | 01 Oct 24 20:43 UTC | 01 Oct 24 20:43 UTC |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-983557 sudo cat                           | kindnet-983557            | jenkins | v1.34.0 | 01 Oct 24 20:43 UTC | 01 Oct 24 20:43 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-983557 sudo docker                        | kindnet-983557            | jenkins | v1.34.0 | 01 Oct 24 20:43 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-983557 sudo                               | kindnet-983557            | jenkins | v1.34.0 | 01 Oct 24 20:43 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-983557 sudo                               | kindnet-983557            | jenkins | v1.34.0 | 01 Oct 24 20:43 UTC | 01 Oct 24 20:43 UTC |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-983557 sudo cat                           | kindnet-983557            | jenkins | v1.34.0 | 01 Oct 24 20:43 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p kindnet-983557 sudo cat                           | kindnet-983557            | jenkins | v1.34.0 | 01 Oct 24 20:43 UTC | 01 Oct 24 20:43 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-983557 sudo                               | kindnet-983557            | jenkins | v1.34.0 | 01 Oct 24 20:43 UTC | 01 Oct 24 20:43 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-983557 sudo                               | kindnet-983557            | jenkins | v1.34.0 | 01 Oct 24 20:43 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-983557 sudo                               | kindnet-983557            | jenkins | v1.34.0 | 01 Oct 24 20:43 UTC | 01 Oct 24 20:43 UTC |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-983557 sudo cat                           | kindnet-983557            | jenkins | v1.34.0 | 01 Oct 24 20:43 UTC | 01 Oct 24 20:43 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-983557 sudo cat                           | kindnet-983557            | jenkins | v1.34.0 | 01 Oct 24 20:43 UTC | 01 Oct 24 20:43 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-983557 sudo                               | kindnet-983557            | jenkins | v1.34.0 | 01 Oct 24 20:43 UTC | 01 Oct 24 20:43 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-983557 sudo                               | kindnet-983557            | jenkins | v1.34.0 | 01 Oct 24 20:43 UTC | 01 Oct 24 20:43 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-983557 sudo                               | kindnet-983557            | jenkins | v1.34.0 | 01 Oct 24 20:43 UTC | 01 Oct 24 20:43 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p kindnet-983557 sudo find                          | kindnet-983557            | jenkins | v1.34.0 | 01 Oct 24 20:43 UTC | 01 Oct 24 20:43 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p kindnet-983557 sudo crio                          | kindnet-983557            | jenkins | v1.34.0 | 01 Oct 24 20:43 UTC | 01 Oct 24 20:43 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p kindnet-983557                                    | kindnet-983557            | jenkins | v1.34.0 | 01 Oct 24 20:43 UTC | 01 Oct 24 20:43 UTC |
	| start   | -p enable-default-cni-983557                         | enable-default-cni-983557 | jenkins | v1.34.0 | 01 Oct 24 20:43 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 20:43:56
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 20:43:56.971769   78160 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:43:56.972115   78160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:43:56.972123   78160 out.go:358] Setting ErrFile to fd 2...
	I1001 20:43:56.972128   78160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:43:56.972447   78160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 20:43:56.973170   78160 out.go:352] Setting JSON to false
	I1001 20:43:56.974408   78160 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8779,"bootTime":1727806658,"procs":291,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 20:43:56.974521   78160 start.go:139] virtualization: kvm guest
	I1001 20:43:57.116926   78160 out.go:177] * [enable-default-cni-983557] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 20:43:57.235497   78160 notify.go:220] Checking for updates...
	I1001 20:43:57.271053   78160 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 20:43:57.397138   78160 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 20:43:57.473985   78160 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:43:57.505767   78160 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 20:43:57.531750   78160 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 20:43:57.532925   78160 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 20:43:57.534487   78160 config.go:182] Loaded profile config "calico-983557": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:43:57.534583   78160 config.go:182] Loaded profile config "custom-flannel-983557": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:43:57.534671   78160 config.go:182] Loaded profile config "default-k8s-diff-port-878552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:43:57.534752   78160 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 20:43:57.580657   78160 out.go:177] * Using the kvm2 driver based on user configuration
	I1001 20:43:57.581822   78160 start.go:297] selected driver: kvm2
	I1001 20:43:57.581847   78160 start.go:901] validating driver "kvm2" against <nil>
	I1001 20:43:57.581863   78160 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 20:43:57.582969   78160 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:43:57.583054   78160 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-11198/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 20:43:57.599110   78160 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 20:43:57.599168   78160 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E1001 20:43:57.599487   78160 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1001 20:43:57.599516   78160 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:43:57.599548   78160 cni.go:84] Creating CNI manager for "bridge"
	I1001 20:43:57.599555   78160 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 20:43:57.599612   78160 start.go:340] cluster config:
	{Name:enable-default-cni-983557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-983557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPa
th: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:43:57.599750   78160 iso.go:125] acquiring lock: {Name:mk06aa0d5182c9fbfa5e40313025cbec2b4400a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:43:57.601199   78160 out.go:177] * Starting "enable-default-cni-983557" primary control-plane node in "enable-default-cni-983557" cluster
	I1001 20:43:57.602232   78160 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 20:43:57.602284   78160 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 20:43:57.602294   78160 cache.go:56] Caching tarball of preloaded images
	I1001 20:43:57.602384   78160 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 20:43:57.602396   78160 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 20:43:57.602520   78160 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/enable-default-cni-983557/config.json ...
	I1001 20:43:57.602543   78160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/enable-default-cni-983557/config.json: {Name:mk0edb44ef68ab56db43aea02f9fe3390446c4c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:43:57.602719   78160 start.go:360] acquireMachinesLock for enable-default-cni-983557: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 20:43:57.602768   78160 start.go:364] duration metric: took 22.596µs to acquireMachinesLock for "enable-default-cni-983557"
	I1001 20:43:57.602790   78160 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-983557 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-983557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2621
44 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 20:43:57.602875   78160 start.go:125] createHost starting for "" (driver="kvm2")
	I1001 20:43:54.492913   74837 pod_ready.go:103] pod "calico-kube-controllers-b8d8894fb-kx5td" in "kube-system" namespace has status "Ready":"False"
	I1001 20:43:56.985085   74837 pod_ready.go:103] pod "calico-kube-controllers-b8d8894fb-kx5td" in "kube-system" namespace has status "Ready":"False"
	
	
	==> CRI-O <==
	Oct 01 20:44:01 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:44:01.172626160Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815441172597346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a7371ec9-3473-44ee-85b2-bb26aba16d5c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:44:01 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:44:01.173403079Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c7f3755c-5c32-4eb4-939e-eac433919306 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:44:01 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:44:01.173456518Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c7f3755c-5c32-4eb4-939e-eac433919306 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:44:01 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:44:01.173663237Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b53d014fc93fa0d3c13ceba3250b8c17ddc9ad02efc11dcbb47175016d6297ff,PodSandboxId:598750c0ae0cb93ab06050ea53cba530205abbf908fc993c5cb87d9894f374d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727814891812340675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfc9ed28-f04b-4e57-b8c0-f41849e1fc25,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13a5f7d3522ffc7d818e6263e8be652ef9699e1486880679368868a1f71b564,PodSandboxId:b8a21f346637326021ef7a70f5a232773987fef9f2da2efafce562f52367f6bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814890973191220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8xth8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a6d614d-f16c-46fb-add5-610ac5895e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e01d737bdedb55720ca53291e44205848456f41a907f0173e5922cfcb152f88,PodSandboxId:4b49d180746c05c865b79dc9b53c4701800e0e235b38bf0ffdf3bd16572799a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814890888660452,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-p7wbg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 13fab587-7dc4-41fc-a74c-47372725886d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f3179c90451f3bf47ed5365f8acfe350f4c4869367228a274bc9aed4b567625,PodSandboxId:856d0b0a067384ca0d19d20676b63ca60e34cf228e1862a9a0dca2cbf072ccfc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1727814889820907760,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-272ln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f2e367f-34c7-4117-bd8e-62b5aa58c7b5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78323440e4e9503b9fb29943c7128695c7518927053b3ad9b42b1aec8791a06d,PodSandboxId:4b842cf4ed836a35d4b86a43bd061253be7012c284075dd31a9a0043e8938f7a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172781487941409928
1,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ecc38e878fb93372a1105602ba5a781,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e865c2ca51f7ac9f6f501addebbe067f008a1aeafe5b80151686573c901539,PodSandboxId:6a1db95ce778961b95683aaab9840b45115917fd22329537f01b5f2bbed37413,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17278148794
14188852,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a542dd8aa2a552cd0f039e06a69c5b4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf139846ac2ddf91d7972ee2fe7b5419a6092ce8690a62daefdc19a587cae285,PodSandboxId:45c80871a3e1d84603784d76845977b542b603e2af717f989c7245339a96ef0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17278
14879363633239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c78d46056165d65e06340ab745db5b2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d00b2f009a8ed9caf9c147fe463b4f73e62fcd28260bd2c467e4593a67500fe4,PodSandboxId:fc682544086ad0e29f344297aa932f46f46dfb8be0e8db6bc3d655123c4bf4d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727814879353630104,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1a8ab2d4c77a09951889ae8c20de084,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ba7369fdd09ffc169c1a57256c1a30ba40cdfc2d480833758b899fda456d1f,PodSandboxId:b12c9d753b6065d694f81837cbd796620f4501e6cf16b45ab2f59e0b5dbbc3b0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727814593018325051,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ecc38e878fb93372a1105602ba5a781,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c7f3755c-5c32-4eb4-939e-eac433919306 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:44:01 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:44:01.215380208Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad2ae780-7c75-4f4a-b68c-cd0b4fc7f73e name=/runtime.v1.RuntimeService/Version
	Oct 01 20:44:01 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:44:01.215478225Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad2ae780-7c75-4f4a-b68c-cd0b4fc7f73e name=/runtime.v1.RuntimeService/Version
	Oct 01 20:44:01 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:44:01.217069059Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=76174154-9460-4c8a-ad24-a802c278d876 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:44:01 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:44:01.217882852Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815441217848566,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76174154-9460-4c8a-ad24-a802c278d876 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:44:01 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:44:01.218724117Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0430033a-8aa6-4583-b1de-e67e8796e49d name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:44:01 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:44:01.218830677Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0430033a-8aa6-4583-b1de-e67e8796e49d name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:44:01 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:44:01.219102518Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b53d014fc93fa0d3c13ceba3250b8c17ddc9ad02efc11dcbb47175016d6297ff,PodSandboxId:598750c0ae0cb93ab06050ea53cba530205abbf908fc993c5cb87d9894f374d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727814891812340675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfc9ed28-f04b-4e57-b8c0-f41849e1fc25,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13a5f7d3522ffc7d818e6263e8be652ef9699e1486880679368868a1f71b564,PodSandboxId:b8a21f346637326021ef7a70f5a232773987fef9f2da2efafce562f52367f6bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814890973191220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8xth8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a6d614d-f16c-46fb-add5-610ac5895e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e01d737bdedb55720ca53291e44205848456f41a907f0173e5922cfcb152f88,PodSandboxId:4b49d180746c05c865b79dc9b53c4701800e0e235b38bf0ffdf3bd16572799a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814890888660452,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-p7wbg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 13fab587-7dc4-41fc-a74c-47372725886d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f3179c90451f3bf47ed5365f8acfe350f4c4869367228a274bc9aed4b567625,PodSandboxId:856d0b0a067384ca0d19d20676b63ca60e34cf228e1862a9a0dca2cbf072ccfc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1727814889820907760,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-272ln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f2e367f-34c7-4117-bd8e-62b5aa58c7b5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78323440e4e9503b9fb29943c7128695c7518927053b3ad9b42b1aec8791a06d,PodSandboxId:4b842cf4ed836a35d4b86a43bd061253be7012c284075dd31a9a0043e8938f7a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172781487941409928
1,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ecc38e878fb93372a1105602ba5a781,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e865c2ca51f7ac9f6f501addebbe067f008a1aeafe5b80151686573c901539,PodSandboxId:6a1db95ce778961b95683aaab9840b45115917fd22329537f01b5f2bbed37413,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17278148794
14188852,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a542dd8aa2a552cd0f039e06a69c5b4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf139846ac2ddf91d7972ee2fe7b5419a6092ce8690a62daefdc19a587cae285,PodSandboxId:45c80871a3e1d84603784d76845977b542b603e2af717f989c7245339a96ef0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17278
14879363633239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c78d46056165d65e06340ab745db5b2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d00b2f009a8ed9caf9c147fe463b4f73e62fcd28260bd2c467e4593a67500fe4,PodSandboxId:fc682544086ad0e29f344297aa932f46f46dfb8be0e8db6bc3d655123c4bf4d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727814879353630104,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1a8ab2d4c77a09951889ae8c20de084,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ba7369fdd09ffc169c1a57256c1a30ba40cdfc2d480833758b899fda456d1f,PodSandboxId:b12c9d753b6065d694f81837cbd796620f4501e6cf16b45ab2f59e0b5dbbc3b0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727814593018325051,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ecc38e878fb93372a1105602ba5a781,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0430033a-8aa6-4583-b1de-e67e8796e49d name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:44:01 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:44:01.263141374Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=923a2b10-f69d-4282-96da-e9c44ce60c07 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:44:01 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:44:01.263323779Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=923a2b10-f69d-4282-96da-e9c44ce60c07 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:44:01 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:44:01.265390995Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0bbf3a1f-533a-4f2a-a224-47470fd8b7b6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:44:01 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:44:01.266050503Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815441266018792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0bbf3a1f-533a-4f2a-a224-47470fd8b7b6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:44:01 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:44:01.266891688Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=52873581-8f6c-43d9-8d45-809ee11751c4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:44:01 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:44:01.266991179Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=52873581-8f6c-43d9-8d45-809ee11751c4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:44:01 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:44:01.267676385Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b53d014fc93fa0d3c13ceba3250b8c17ddc9ad02efc11dcbb47175016d6297ff,PodSandboxId:598750c0ae0cb93ab06050ea53cba530205abbf908fc993c5cb87d9894f374d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727814891812340675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfc9ed28-f04b-4e57-b8c0-f41849e1fc25,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13a5f7d3522ffc7d818e6263e8be652ef9699e1486880679368868a1f71b564,PodSandboxId:b8a21f346637326021ef7a70f5a232773987fef9f2da2efafce562f52367f6bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814890973191220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8xth8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a6d614d-f16c-46fb-add5-610ac5895e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e01d737bdedb55720ca53291e44205848456f41a907f0173e5922cfcb152f88,PodSandboxId:4b49d180746c05c865b79dc9b53c4701800e0e235b38bf0ffdf3bd16572799a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814890888660452,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-p7wbg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 13fab587-7dc4-41fc-a74c-47372725886d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f3179c90451f3bf47ed5365f8acfe350f4c4869367228a274bc9aed4b567625,PodSandboxId:856d0b0a067384ca0d19d20676b63ca60e34cf228e1862a9a0dca2cbf072ccfc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1727814889820907760,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-272ln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f2e367f-34c7-4117-bd8e-62b5aa58c7b5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78323440e4e9503b9fb29943c7128695c7518927053b3ad9b42b1aec8791a06d,PodSandboxId:4b842cf4ed836a35d4b86a43bd061253be7012c284075dd31a9a0043e8938f7a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172781487941409928
1,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ecc38e878fb93372a1105602ba5a781,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e865c2ca51f7ac9f6f501addebbe067f008a1aeafe5b80151686573c901539,PodSandboxId:6a1db95ce778961b95683aaab9840b45115917fd22329537f01b5f2bbed37413,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17278148794
14188852,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a542dd8aa2a552cd0f039e06a69c5b4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf139846ac2ddf91d7972ee2fe7b5419a6092ce8690a62daefdc19a587cae285,PodSandboxId:45c80871a3e1d84603784d76845977b542b603e2af717f989c7245339a96ef0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17278
14879363633239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c78d46056165d65e06340ab745db5b2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d00b2f009a8ed9caf9c147fe463b4f73e62fcd28260bd2c467e4593a67500fe4,PodSandboxId:fc682544086ad0e29f344297aa932f46f46dfb8be0e8db6bc3d655123c4bf4d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727814879353630104,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1a8ab2d4c77a09951889ae8c20de084,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ba7369fdd09ffc169c1a57256c1a30ba40cdfc2d480833758b899fda456d1f,PodSandboxId:b12c9d753b6065d694f81837cbd796620f4501e6cf16b45ab2f59e0b5dbbc3b0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727814593018325051,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ecc38e878fb93372a1105602ba5a781,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=52873581-8f6c-43d9-8d45-809ee11751c4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:44:01 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:44:01.319077785Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ee1c5954-1301-406f-9a6c-36de97b17e41 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:44:01 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:44:01.319173589Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ee1c5954-1301-406f-9a6c-36de97b17e41 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:44:01 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:44:01.320344184Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d3622cc1-2a18-495d-83be-b0d3df6e5c1d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:44:01 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:44:01.320998369Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815441320958061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d3622cc1-2a18-495d-83be-b0d3df6e5c1d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:44:01 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:44:01.321833426Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b189a8b8-dc14-4519-b5ac-5b577305e717 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:44:01 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:44:01.321917233Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b189a8b8-dc14-4519-b5ac-5b577305e717 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:44:01 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:44:01.322210276Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b53d014fc93fa0d3c13ceba3250b8c17ddc9ad02efc11dcbb47175016d6297ff,PodSandboxId:598750c0ae0cb93ab06050ea53cba530205abbf908fc993c5cb87d9894f374d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727814891812340675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfc9ed28-f04b-4e57-b8c0-f41849e1fc25,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13a5f7d3522ffc7d818e6263e8be652ef9699e1486880679368868a1f71b564,PodSandboxId:b8a21f346637326021ef7a70f5a232773987fef9f2da2efafce562f52367f6bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814890973191220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8xth8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a6d614d-f16c-46fb-add5-610ac5895e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e01d737bdedb55720ca53291e44205848456f41a907f0173e5922cfcb152f88,PodSandboxId:4b49d180746c05c865b79dc9b53c4701800e0e235b38bf0ffdf3bd16572799a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814890888660452,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-p7wbg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 13fab587-7dc4-41fc-a74c-47372725886d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f3179c90451f3bf47ed5365f8acfe350f4c4869367228a274bc9aed4b567625,PodSandboxId:856d0b0a067384ca0d19d20676b63ca60e34cf228e1862a9a0dca2cbf072ccfc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1727814889820907760,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-272ln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f2e367f-34c7-4117-bd8e-62b5aa58c7b5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78323440e4e9503b9fb29943c7128695c7518927053b3ad9b42b1aec8791a06d,PodSandboxId:4b842cf4ed836a35d4b86a43bd061253be7012c284075dd31a9a0043e8938f7a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172781487941409928
1,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ecc38e878fb93372a1105602ba5a781,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e865c2ca51f7ac9f6f501addebbe067f008a1aeafe5b80151686573c901539,PodSandboxId:6a1db95ce778961b95683aaab9840b45115917fd22329537f01b5f2bbed37413,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17278148794
14188852,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a542dd8aa2a552cd0f039e06a69c5b4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf139846ac2ddf91d7972ee2fe7b5419a6092ce8690a62daefdc19a587cae285,PodSandboxId:45c80871a3e1d84603784d76845977b542b603e2af717f989c7245339a96ef0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17278
14879363633239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c78d46056165d65e06340ab745db5b2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d00b2f009a8ed9caf9c147fe463b4f73e62fcd28260bd2c467e4593a67500fe4,PodSandboxId:fc682544086ad0e29f344297aa932f46f46dfb8be0e8db6bc3d655123c4bf4d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727814879353630104,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1a8ab2d4c77a09951889ae8c20de084,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ba7369fdd09ffc169c1a57256c1a30ba40cdfc2d480833758b899fda456d1f,PodSandboxId:b12c9d753b6065d694f81837cbd796620f4501e6cf16b45ab2f59e0b5dbbc3b0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727814593018325051,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ecc38e878fb93372a1105602ba5a781,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b189a8b8-dc14-4519-b5ac-5b577305e717 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b53d014fc93fa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   598750c0ae0cb       storage-provisioner
	b13a5f7d3522f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   b8a21f3466373       coredns-7c65d6cfc9-8xth8
	7e01d737bdedb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   4b49d180746c0       coredns-7c65d6cfc9-p7wbg
	5f3179c90451f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   856d0b0a06738       kube-proxy-272ln
	e9e865c2ca51f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   6a1db95ce7789       kube-controller-manager-default-k8s-diff-port-878552
	78323440e4e95       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   4b842cf4ed836       kube-apiserver-default-k8s-diff-port-878552
	cf139846ac2dd       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   45c80871a3e1d       etcd-default-k8s-diff-port-878552
	d00b2f009a8ed       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   fc682544086ad       kube-scheduler-default-k8s-diff-port-878552
	90ba7369fdd09       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   b12c9d753b606       kube-apiserver-default-k8s-diff-port-878552
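Editor's note: the container status table above is CRI output captured on the node. A minimal sketch for reproducing it against this profile, assuming the profile name default-k8s-diff-port-878552 taken from the logs and that crictl is present in the minikube VM (it is for the crio runtime); if this minikube version does not accept a trailing command, run minikube ssh interactively and then the crictl command:

    $ minikube ssh -p default-k8s-diff-port-878552 -- sudo crictl ps -a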
	
	
	==> coredns [7e01d737bdedb55720ca53291e44205848456f41a907f0173e5922cfcb152f88] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [b13a5f7d3522ffc7d818e6263e8be652ef9699e1486880679368868a1f71b564] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-878552
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-878552
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=default-k8s-diff-port-878552
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T20_34_45_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 20:34:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-878552
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 20:43:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 20:40:00 +0000   Tue, 01 Oct 2024 20:34:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 20:40:00 +0000   Tue, 01 Oct 2024 20:34:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 20:40:00 +0000   Tue, 01 Oct 2024 20:34:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 20:40:00 +0000   Tue, 01 Oct 2024 20:34:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.4
	  Hostname:    default-k8s-diff-port-878552
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 16251e0cf7b04633be33e6ffa535a6a6
	  System UUID:                16251e0c-f7b0-4633-be33-e6ffa535a6a6
	  Boot ID:                    d0f8220a-f43b-4b0a-8271-fa5e5ab0d62f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-8xth8                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 coredns-7c65d6cfc9-p7wbg                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 etcd-default-k8s-diff-port-878552                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m16s
	  kube-system                 kube-apiserver-default-k8s-diff-port-878552             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-878552    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-proxy-272ln                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-scheduler-default-k8s-diff-port-878552             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 metrics-server-6867b74b74-75m4s                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m10s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m11s  kube-proxy       
	  Normal  Starting                 9m17s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m16s  kubelet          Node default-k8s-diff-port-878552 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m16s  kubelet          Node default-k8s-diff-port-878552 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m16s  kubelet          Node default-k8s-diff-port-878552 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m13s  node-controller  Node default-k8s-diff-port-878552 event: Registered Node default-k8s-diff-port-878552 in Controller
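Editor's note: the node description above can be regenerated from the cluster, assuming the kubeconfig context minikube creates with the same name as the profile:

    $ kubectl --context default-k8s-diff-port-878552 describe node default-k8s-diff-port-878552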
	
	
	==> dmesg <==
	[  +0.053617] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039786] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.884480] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.886563] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.466562] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.378007] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.067603] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.080377] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.196728] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.123874] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.304203] systemd-fstab-generator[703]: Ignoring "noauto" option for root device
	[  +3.984537] systemd-fstab-generator[795]: Ignoring "noauto" option for root device
	[  +2.164540] systemd-fstab-generator[914]: Ignoring "noauto" option for root device
	[  +0.064099] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.534744] kauditd_printk_skb: 69 callbacks suppressed
	[Oct 1 20:30] kauditd_printk_skb: 85 callbacks suppressed
	[Oct 1 20:34] systemd-fstab-generator[2567]: Ignoring "noauto" option for root device
	[  +0.063611] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.485904] systemd-fstab-generator[2884]: Ignoring "noauto" option for root device
	[  +0.080290] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.075473] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.323939] systemd-fstab-generator[3072]: Ignoring "noauto" option for root device
	[  +4.679912] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [cf139846ac2ddf91d7972ee2fe7b5419a6092ce8690a62daefdc19a587cae285] <==
	{"level":"warn","ts":"2024-10-01T20:41:44.759050Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T20:41:44.231714Z","time spent":"527.332737ms","remote":"127.0.0.1:40400","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-10-01T20:41:44.759284Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"493.072119ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T20:41:44.759324Z","caller":"traceutil/trace.go:171","msg":"trace[279298348] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:789; }","duration":"493.163501ms","start":"2024-10-01T20:41:44.266152Z","end":"2024-10-01T20:41:44.759315Z","steps":["trace[279298348] 'agreement among raft nodes before linearized reading'  (duration: 493.044711ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T20:41:44.759356Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T20:41:44.266116Z","time spent":"493.233764ms","remote":"127.0.0.1:40550","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-10-01T20:41:45.003708Z","caller":"traceutil/trace.go:171","msg":"trace[810356461] transaction","detail":"{read_only:false; response_revision:791; number_of_response:1; }","duration":"184.525744ms","start":"2024-10-01T20:41:44.819168Z","end":"2024-10-01T20:41:45.003694Z","steps":["trace[810356461] 'process raft request'  (duration: 112.320691ms)","trace[810356461] 'compare'  (duration: 71.614412ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-01T20:42:25.609610Z","caller":"traceutil/trace.go:171","msg":"trace[74741108] transaction","detail":"{read_only:false; response_revision:822; number_of_response:1; }","duration":"380.500562ms","start":"2024-10-01T20:42:25.229092Z","end":"2024-10-01T20:42:25.609592Z","steps":["trace[74741108] 'process raft request'  (duration: 380.291475ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T20:42:25.609784Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T20:42:25.229071Z","time spent":"380.627375ms","remote":"127.0.0.1:40542","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:820 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-10-01T20:42:25.609964Z","caller":"traceutil/trace.go:171","msg":"trace[484972863] linearizableReadLoop","detail":"{readStateIndex:928; appliedIndex:928; }","duration":"344.78373ms","start":"2024-10-01T20:42:25.265164Z","end":"2024-10-01T20:42:25.609948Z","steps":["trace[484972863] 'read index received'  (duration: 344.772808ms)","trace[484972863] 'applied index is now lower than readState.Index'  (duration: 6.839µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-01T20:42:25.610132Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"344.9393ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T20:42:25.610953Z","caller":"traceutil/trace.go:171","msg":"trace[1976103501] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:822; }","duration":"345.765417ms","start":"2024-10-01T20:42:25.265160Z","end":"2024-10-01T20:42:25.610925Z","steps":["trace[1976103501] 'agreement among raft nodes before linearized reading'  (duration: 344.916317ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T20:42:25.611053Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T20:42:25.265126Z","time spent":"345.910658ms","remote":"127.0.0.1:40550","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-01T20:42:25.868657Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.910737ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15059353574629306791 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-878552\" mod_revision:815 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-878552\" value_size:532 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-878552\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-01T20:42:25.868753Z","caller":"traceutil/trace.go:171","msg":"trace[784223157] transaction","detail":"{read_only:false; response_revision:823; number_of_response:1; }","duration":"530.21241ms","start":"2024-10-01T20:42:25.338525Z","end":"2024-10-01T20:42:25.868738Z","steps":["trace[784223157] 'process raft request'  (duration: 401.148112ms)","trace[784223157] 'compare'  (duration: 128.808679ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-01T20:42:25.868812Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T20:42:25.338509Z","time spent":"530.27818ms","remote":"127.0.0.1:40630","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":601,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-878552\" mod_revision:815 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-878552\" value_size:532 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-878552\" > >"}
	{"level":"warn","ts":"2024-10-01T20:42:50.244981Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"299.867724ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T20:42:50.245950Z","caller":"traceutil/trace.go:171","msg":"trace[942953614] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:841; }","duration":"300.862256ms","start":"2024-10-01T20:42:49.945062Z","end":"2024-10-01T20:42:50.245924Z","steps":["trace[942953614] 'range keys from in-memory index tree'  (duration: 299.849849ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T20:43:26.393174Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.29363ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T20:43:26.393359Z","caller":"traceutil/trace.go:171","msg":"trace[2103709342] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:870; }","duration":"130.544512ms","start":"2024-10-01T20:43:26.262799Z","end":"2024-10-01T20:43:26.393344Z","steps":["trace[2103709342] 'range keys from in-memory index tree'  (duration: 130.221298ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T20:43:27.210639Z","caller":"traceutil/trace.go:171","msg":"trace[1123025172] linearizableReadLoop","detail":"{readStateIndex:989; appliedIndex:988; }","duration":"266.088641ms","start":"2024-10-01T20:43:26.944531Z","end":"2024-10-01T20:43:27.210620Z","steps":["trace[1123025172] 'read index received'  (duration: 265.837576ms)","trace[1123025172] 'applied index is now lower than readState.Index'  (duration: 250.347µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-01T20:43:27.210824Z","caller":"traceutil/trace.go:171","msg":"trace[114279920] transaction","detail":"{read_only:false; response_revision:871; number_of_response:1; }","duration":"304.78287ms","start":"2024-10-01T20:43:26.906021Z","end":"2024-10-01T20:43:27.210804Z","steps":["trace[114279920] 'process raft request'  (duration: 304.442632ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T20:43:27.210960Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T20:43:26.906004Z","time spent":"304.855523ms","remote":"127.0.0.1:40630","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":601,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-878552\" mod_revision:863 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-878552\" value_size:532 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-878552\" > >"}
	{"level":"warn","ts":"2024-10-01T20:43:27.211097Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"266.564065ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T20:43:27.211130Z","caller":"traceutil/trace.go:171","msg":"trace[1282754303] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:871; }","duration":"266.601577ms","start":"2024-10-01T20:43:26.944523Z","end":"2024-10-01T20:43:27.211124Z","steps":["trace[1282754303] 'agreement among raft nodes before linearized reading'  (duration: 266.552846ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T20:43:28.452156Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.273458ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T20:43:28.452287Z","caller":"traceutil/trace.go:171","msg":"trace[1232563570] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:872; }","duration":"189.412632ms","start":"2024-10-01T20:43:28.262814Z","end":"2024-10-01T20:43:28.452227Z","steps":["trace[1232563570] 'range keys from in-memory index tree'  (duration: 188.982522ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:44:01 up 14 min,  0 users,  load average: 0.14, 0.21, 0.13
	Linux default-k8s-diff-port-878552 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [78323440e4e9503b9fb29943c7128695c7518927053b3ad9b42b1aec8791a06d] <==
	W1001 20:39:43.075820       1 handler_proxy.go:99] no RequestInfo found in the context
	E1001 20:39:43.075985       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1001 20:39:43.076873       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1001 20:39:43.078136       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1001 20:40:43.078119       1 handler_proxy.go:99] no RequestInfo found in the context
	E1001 20:40:43.078423       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1001 20:40:43.078538       1 handler_proxy.go:99] no RequestInfo found in the context
	E1001 20:40:43.078693       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1001 20:40:43.080469       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1001 20:40:43.080532       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1001 20:42:43.081406       1 handler_proxy.go:99] no RequestInfo found in the context
	E1001 20:42:43.081554       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1001 20:42:43.081634       1 handler_proxy.go:99] no RequestInfo found in the context
	E1001 20:42:43.081648       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1001 20:42:43.082744       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1001 20:42:43.082755       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
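Editor's note: the recurring 503s for v1beta1.metrics.k8s.io show that the aggregated metrics API is registered but its backing service never becomes available, which is consistent with the metrics-addon failures recorded for this profile. A minimal check, reusing the metrics-server pod name visible in the node description above (the pod name is specific to this run):

    $ kubectl --context default-k8s-diff-port-878552 get apiservice v1beta1.metrics.k8s.io
    $ kubectl --context default-k8s-diff-port-878552 -n kube-system logs metrics-server-6867b74b74-75m4s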
	
	
	==> kube-apiserver [90ba7369fdd09ffc169c1a57256c1a30ba40cdfc2d480833758b899fda456d1f] <==
	W1001 20:34:32.962840       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.022743       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.069281       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.201295       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.220163       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.238871       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.282984       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.303725       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.333862       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.377032       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.382492       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.390141       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.441686       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.534763       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.594004       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.610520       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.682412       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.721800       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.779607       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.821015       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.828493       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.862357       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.964553       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:34.053428       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:34.099899       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [e9e865c2ca51f7ac9f6f501addebbe067f008a1aeafe5b80151686573c901539] <==
	E1001 20:38:48.941682       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:38:49.487377       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:39:18.950012       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:39:19.496874       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:39:48.956835       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:39:49.508409       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1001 20:40:00.709645       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-878552"
	E1001 20:40:18.963791       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:40:19.531146       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:40:48.972351       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:40:49.541530       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1001 20:40:51.035186       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="270.748µs"
	I1001 20:41:03.926375       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="244.972µs"
	E1001 20:41:18.979537       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:41:19.550767       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:41:48.986747       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:41:49.560134       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:42:18.994487       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:42:19.575782       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:42:49.001980       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:42:49.585396       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:43:19.010127       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:43:19.596227       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:43:49.017907       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:43:49.609228       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [5f3179c90451f3bf47ed5365f8acfe350f4c4869367228a274bc9aed4b567625] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 20:34:50.274174       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 20:34:50.301186       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.4"]
	E1001 20:34:50.301312       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 20:34:50.379295       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 20:34:50.379353       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 20:34:50.379381       1 server_linux.go:169] "Using iptables Proxier"
	I1001 20:34:50.389552       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 20:34:50.389890       1 server.go:483] "Version info" version="v1.31.1"
	I1001 20:34:50.389914       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 20:34:50.394729       1 config.go:199] "Starting service config controller"
	I1001 20:34:50.394786       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 20:34:50.394818       1 config.go:105] "Starting endpoint slice config controller"
	I1001 20:34:50.394822       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 20:34:50.398198       1 config.go:328] "Starting node config controller"
	I1001 20:34:50.399054       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 20:34:50.495188       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 20:34:50.495266       1 shared_informer.go:320] Caches are synced for service config
	I1001 20:34:50.499750       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d00b2f009a8ed9caf9c147fe463b4f73e62fcd28260bd2c467e4593a67500fe4] <==
	W1001 20:34:42.096178       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1001 20:34:42.097466       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:34:42.949684       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1001 20:34:42.950174       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:34:42.961060       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1001 20:34:42.961223       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:34:43.008286       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1001 20:34:43.008351       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 20:34:43.021863       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1001 20:34:43.023488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1001 20:34:43.109610       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1001 20:34:43.109688       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:34:43.115753       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1001 20:34:43.115848       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:34:43.118986       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1001 20:34:43.119065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:34:43.279611       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1001 20:34:43.279812       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1001 20:34:43.335071       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1001 20:34:43.335179       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:34:43.350960       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1001 20:34:43.351080       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 20:34:43.353463       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1001 20:34:43.353546       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1001 20:34:46.157590       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 01 20:42:50 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:42:50.908928    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-75m4s" podUID="c8eb4eb3-ea7f-4a88-8c1f-e3331f44ae53"
	Oct 01 20:42:55 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:42:55.039517    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815375038913493,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:42:55 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:42:55.039571    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815375038913493,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:43:04 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:43:04.903805    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-75m4s" podUID="c8eb4eb3-ea7f-4a88-8c1f-e3331f44ae53"
	Oct 01 20:43:05 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:43:05.042354    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815385041936596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:43:05 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:43:05.042408    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815385041936596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:43:15 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:43:15.044743    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815395044327564,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:43:15 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:43:15.044809    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815395044327564,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:43:16 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:43:16.904173    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-75m4s" podUID="c8eb4eb3-ea7f-4a88-8c1f-e3331f44ae53"
	Oct 01 20:43:25 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:43:25.046466    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815405045857182,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:43:25 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:43:25.046961    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815405045857182,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:43:31 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:43:31.902657    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-75m4s" podUID="c8eb4eb3-ea7f-4a88-8c1f-e3331f44ae53"
	Oct 01 20:43:35 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:43:35.049119    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815415048661846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:43:35 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:43:35.049174    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815415048661846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:43:44 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:43:44.911995    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-75m4s" podUID="c8eb4eb3-ea7f-4a88-8c1f-e3331f44ae53"
	Oct 01 20:43:44 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:43:44.928136    2891 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 01 20:43:44 default-k8s-diff-port-878552 kubelet[2891]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 01 20:43:44 default-k8s-diff-port-878552 kubelet[2891]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 01 20:43:44 default-k8s-diff-port-878552 kubelet[2891]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 20:43:44 default-k8s-diff-port-878552 kubelet[2891]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 20:43:45 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:43:45.052814    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815425051821978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:43:45 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:43:45.052848    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815425051821978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:43:55 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:43:55.053949    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815435053661929,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:43:55 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:43:55.054007    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815435053661929,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:43:57 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:43:57.902960    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-75m4s" podUID="c8eb4eb3-ea7f-4a88-8c1f-e3331f44ae53"
	
	
	==> storage-provisioner [b53d014fc93fa0d3c13ceba3250b8c17ddc9ad02efc11dcbb47175016d6297ff] <==
	I1001 20:34:51.930775       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1001 20:34:51.947934       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1001 20:34:51.948094       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1001 20:34:51.956579       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1001 20:34:51.956769       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-878552_28b27df9-336d-4270-b7ee-fabafab5d940!
	I1001 20:34:51.957530       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f62159ef-15bf-4a2f-99b1-e8da4f3add22", APIVersion:"v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-878552_28b27df9-336d-4270-b7ee-fabafab5d940 became leader
	I1001 20:34:52.060007       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-878552_28b27df9-336d-4270-b7ee-fabafab5d940!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-878552 -n default-k8s-diff-port-878552
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-878552 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-75m4s
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-878552 describe pod metrics-server-6867b74b74-75m4s
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-878552 describe pod metrics-server-6867b74b74-75m4s: exit status 1 (87.734287ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-75m4s" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-878552 describe pod metrics-server-6867b74b74-75m4s: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.45s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (440.75s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-106982 -n embed-certs-106982
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-01 20:42:47.65547376 +0000 UTC m=+6518.604277148
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-106982 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-106982 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.28µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-106982 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-106982 -n embed-certs-106982
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-106982 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-106982 logs -n 25: (3.75461146s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396    | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC | 01 Oct 24 20:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-556200 | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC | 01 Oct 24 20:16 UTC |
	|         | disable-driver-mounts-556200                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC | 01 Oct 24 20:21 UTC |
	|         | default-k8s-diff-port-878552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-106982                                  | embed-certs-106982           | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC | 01 Oct 24 20:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-359369                              | old-k8s-version-359369       | jenkins | v1.34.0 | 01 Oct 24 20:17 UTC | 01 Oct 24 20:17 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-359369             | old-k8s-version-359369       | jenkins | v1.34.0 | 01 Oct 24 20:17 UTC | 01 Oct 24 20:17 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-359369                              | old-k8s-version-359369       | jenkins | v1.34.0 | 01 Oct 24 20:17 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-878552  | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:22 UTC | 01 Oct 24 20:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:22 UTC |                     |
	|         | default-k8s-diff-port-878552                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-878552       | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:24 UTC | 01 Oct 24 20:34 UTC |
	|         | default-k8s-diff-port-878552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-359369                              | old-k8s-version-359369       | jenkins | v1.34.0 | 01 Oct 24 20:40 UTC | 01 Oct 24 20:40 UTC |
	| start   | -p newest-cni-204654 --memory=2200 --alsologtostderr   | newest-cni-204654            | jenkins | v1.34.0 | 01 Oct 24 20:40 UTC | 01 Oct 24 20:41 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-204654             | newest-cni-204654            | jenkins | v1.34.0 | 01 Oct 24 20:41 UTC | 01 Oct 24 20:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-204654                                   | newest-cni-204654            | jenkins | v1.34.0 | 01 Oct 24 20:41 UTC | 01 Oct 24 20:41 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-204654                  | newest-cni-204654            | jenkins | v1.34.0 | 01 Oct 24 20:41 UTC | 01 Oct 24 20:41 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-204654 --memory=2200 --alsologtostderr   | newest-cni-204654            | jenkins | v1.34.0 | 01 Oct 24 20:41 UTC | 01 Oct 24 20:41 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-204654 image list                           | newest-cni-204654            | jenkins | v1.34.0 | 01 Oct 24 20:41 UTC | 01 Oct 24 20:41 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-204654                                   | newest-cni-204654            | jenkins | v1.34.0 | 01 Oct 24 20:41 UTC | 01 Oct 24 20:41 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-204654                                   | newest-cni-204654            | jenkins | v1.34.0 | 01 Oct 24 20:41 UTC | 01 Oct 24 20:41 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-204654                                   | newest-cni-204654            | jenkins | v1.34.0 | 01 Oct 24 20:41 UTC | 01 Oct 24 20:41 UTC |
	| delete  | -p newest-cni-204654                                   | newest-cni-204654            | jenkins | v1.34.0 | 01 Oct 24 20:41 UTC | 01 Oct 24 20:41 UTC |
	| start   | -p auto-983557 --memory=3072                           | auto-983557                  | jenkins | v1.34.0 | 01 Oct 24 20:41 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p no-preload-262337                                   | no-preload-262337            | jenkins | v1.34.0 | 01 Oct 24 20:41 UTC | 01 Oct 24 20:42 UTC |
	| start   | -p kindnet-983557                                      | kindnet-983557               | jenkins | v1.34.0 | 01 Oct 24 20:42 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 20:42:01
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 20:42:01.084597   74151 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:42:01.085201   74151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:42:01.085260   74151 out.go:358] Setting ErrFile to fd 2...
	I1001 20:42:01.085278   74151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:42:01.085725   74151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 20:42:01.086749   74151 out.go:352] Setting JSON to false
	I1001 20:42:01.087767   74151 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8663,"bootTime":1727806658,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 20:42:01.087875   74151 start.go:139] virtualization: kvm guest
	I1001 20:42:01.089480   74151 out.go:177] * [kindnet-983557] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 20:42:01.090535   74151 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 20:42:01.090538   74151 notify.go:220] Checking for updates...
	I1001 20:42:01.092461   74151 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 20:42:01.093578   74151 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:42:01.094498   74151 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 20:42:01.095406   74151 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 20:42:01.096308   74151 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 20:42:01.097633   74151 config.go:182] Loaded profile config "auto-983557": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:42:01.097746   74151 config.go:182] Loaded profile config "default-k8s-diff-port-878552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:42:01.097834   74151 config.go:182] Loaded profile config "embed-certs-106982": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:42:01.097936   74151 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 20:42:01.138520   74151 out.go:177] * Using the kvm2 driver based on user configuration
	I1001 20:42:01.139699   74151 start.go:297] selected driver: kvm2
	I1001 20:42:01.139719   74151 start.go:901] validating driver "kvm2" against <nil>
	I1001 20:42:01.139731   74151 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 20:42:01.140504   74151 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:42:01.140608   74151 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-11198/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 20:42:01.157545   74151 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 20:42:01.157607   74151 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 20:42:01.157898   74151 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:42:01.157943   74151 cni.go:84] Creating CNI manager for "kindnet"
	I1001 20:42:01.157954   74151 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 20:42:01.158025   74151 start.go:340] cluster config:
	{Name:kindnet-983557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-983557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:42:01.158150   74151 iso.go:125] acquiring lock: {Name:mk06aa0d5182c9fbfa5e40313025cbec2b4400a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:42:01.160468   74151 out.go:177] * Starting "kindnet-983557" primary control-plane node in "kindnet-983557" cluster
	I1001 20:41:57.206237   73835 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 20:41:57.206438   73835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:41:57.206480   73835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:41:57.221502   73835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33485
	I1001 20:41:57.221960   73835 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:41:57.222542   73835 main.go:141] libmachine: Using API Version  1
	I1001 20:41:57.222571   73835 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:41:57.223024   73835 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:41:57.223270   73835 main.go:141] libmachine: (auto-983557) Calling .GetMachineName
	I1001 20:41:57.223485   73835 main.go:141] libmachine: (auto-983557) Calling .DriverName
	I1001 20:41:57.223772   73835 start.go:159] libmachine.API.Create for "auto-983557" (driver="kvm2")
	I1001 20:41:57.223800   73835 client.go:168] LocalClient.Create starting
	I1001 20:41:57.223834   73835 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem
	I1001 20:41:57.223875   73835 main.go:141] libmachine: Decoding PEM data...
	I1001 20:41:57.223892   73835 main.go:141] libmachine: Parsing certificate...
	I1001 20:41:57.223938   73835 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem
	I1001 20:41:57.223956   73835 main.go:141] libmachine: Decoding PEM data...
	I1001 20:41:57.223967   73835 main.go:141] libmachine: Parsing certificate...
	I1001 20:41:57.223983   73835 main.go:141] libmachine: Running pre-create checks...
	I1001 20:41:57.223995   73835 main.go:141] libmachine: (auto-983557) Calling .PreCreateCheck
	I1001 20:41:57.224500   73835 main.go:141] libmachine: (auto-983557) Calling .GetConfigRaw
	I1001 20:41:57.224952   73835 main.go:141] libmachine: Creating machine...
	I1001 20:41:57.224967   73835 main.go:141] libmachine: (auto-983557) Calling .Create
	I1001 20:41:57.225134   73835 main.go:141] libmachine: (auto-983557) Creating KVM machine...
	I1001 20:41:57.226530   73835 main.go:141] libmachine: (auto-983557) DBG | found existing default KVM network
	I1001 20:41:57.228026   73835 main.go:141] libmachine: (auto-983557) DBG | I1001 20:41:57.227804   73858 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:38:12:0e} reservation:<nil>}
	I1001 20:41:57.229119   73835 main.go:141] libmachine: (auto-983557) DBG | I1001 20:41:57.229014   73858 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:59:db:fe} reservation:<nil>}
	I1001 20:41:57.229777   73835 main.go:141] libmachine: (auto-983557) DBG | I1001 20:41:57.229702   73858 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:e7:60:87} reservation:<nil>}
	I1001 20:41:57.230837   73835 main.go:141] libmachine: (auto-983557) DBG | I1001 20:41:57.230764   73858 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a3840}
	I1001 20:41:57.230877   73835 main.go:141] libmachine: (auto-983557) DBG | created network xml: 
	I1001 20:41:57.230893   73835 main.go:141] libmachine: (auto-983557) DBG | <network>
	I1001 20:41:57.230901   73835 main.go:141] libmachine: (auto-983557) DBG |   <name>mk-auto-983557</name>
	I1001 20:41:57.230921   73835 main.go:141] libmachine: (auto-983557) DBG |   <dns enable='no'/>
	I1001 20:41:57.230932   73835 main.go:141] libmachine: (auto-983557) DBG |   
	I1001 20:41:57.230947   73835 main.go:141] libmachine: (auto-983557) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1001 20:41:57.230989   73835 main.go:141] libmachine: (auto-983557) DBG |     <dhcp>
	I1001 20:41:57.231016   73835 main.go:141] libmachine: (auto-983557) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1001 20:41:57.231038   73835 main.go:141] libmachine: (auto-983557) DBG |     </dhcp>
	I1001 20:41:57.231048   73835 main.go:141] libmachine: (auto-983557) DBG |   </ip>
	I1001 20:41:57.231054   73835 main.go:141] libmachine: (auto-983557) DBG |   
	I1001 20:41:57.231059   73835 main.go:141] libmachine: (auto-983557) DBG | </network>
	I1001 20:41:57.231066   73835 main.go:141] libmachine: (auto-983557) DBG | 
	I1001 20:41:57.236141   73835 main.go:141] libmachine: (auto-983557) DBG | trying to create private KVM network mk-auto-983557 192.168.72.0/24...
	I1001 20:41:57.315991   73835 main.go:141] libmachine: (auto-983557) DBG | private KVM network mk-auto-983557 192.168.72.0/24 created
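The lines above show the driver skipping the subnets already claimed by virbr1-virbr3, settling on the first free /24 (192.168.72.0/24), and defining a private libvirt network from a small XML document. A minimal sketch of rendering an equivalent <network> definition with Go's text/template follows; the struct, its field names, and the hard-coded values are assumptions taken from this log, not minikube's actual types.

```go
package main

import (
	"os"
	"text/template"
)

// netConfig mirrors the values visible in the log above: the libvirt network
// name, the host-side gateway address, and the DHCP range handed to the VM.
// The struct and template are illustrative, not minikube's actual types.
type netConfig struct {
	Name      string
	Gateway   string
	Netmask   string
	DHCPStart string
	DHCPEnd   string
}

const networkTmpl = `<network>
  <name>{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
    <dhcp>
      <range start='{{.DHCPStart}}' end='{{.DHCPEnd}}'/>
    </dhcp>
  </ip>
</network>
`

func main() {
	cfg := netConfig{
		Name:      "mk-auto-983557",
		Gateway:   "192.168.72.1",
		Netmask:   "255.255.255.0",
		DHCPStart: "192.168.72.2",
		DHCPEnd:   "192.168.72.253",
	}
	// Render the XML that would be handed to libvirt's network-define call.
	tmpl := template.Must(template.New("net").Parse(networkTmpl))
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
```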
	I1001 20:41:57.316045   73835 main.go:141] libmachine: (auto-983557) DBG | I1001 20:41:57.315967   73858 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 20:41:57.316068   73835 main.go:141] libmachine: (auto-983557) Setting up store path in /home/jenkins/minikube-integration/19736-11198/.minikube/machines/auto-983557 ...
	I1001 20:41:57.316120   73835 main.go:141] libmachine: (auto-983557) Building disk image from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 20:41:57.316149   73835 main.go:141] libmachine: (auto-983557) Downloading /home/jenkins/minikube-integration/19736-11198/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 20:41:57.560140   73835 main.go:141] libmachine: (auto-983557) DBG | I1001 20:41:57.559992   73858 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/auto-983557/id_rsa...
	I1001 20:41:57.648097   73835 main.go:141] libmachine: (auto-983557) DBG | I1001 20:41:57.647959   73858 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/auto-983557/auto-983557.rawdisk...
	I1001 20:41:57.648125   73835 main.go:141] libmachine: (auto-983557) DBG | Writing magic tar header
	I1001 20:41:57.648136   73835 main.go:141] libmachine: (auto-983557) DBG | Writing SSH key tar header
	I1001 20:41:57.648148   73835 main.go:141] libmachine: (auto-983557) DBG | I1001 20:41:57.648082   73858 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/auto-983557 ...
	I1001 20:41:57.648219   73835 main.go:141] libmachine: (auto-983557) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/auto-983557
	I1001 20:41:57.648255   73835 main.go:141] libmachine: (auto-983557) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines
	I1001 20:41:57.648270   73835 main.go:141] libmachine: (auto-983557) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 20:41:57.648283   73835 main.go:141] libmachine: (auto-983557) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/auto-983557 (perms=drwx------)
	I1001 20:41:57.648304   73835 main.go:141] libmachine: (auto-983557) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines (perms=drwxr-xr-x)
	I1001 20:41:57.648317   73835 main.go:141] libmachine: (auto-983557) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube (perms=drwxr-xr-x)
	I1001 20:41:57.648327   73835 main.go:141] libmachine: (auto-983557) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198 (perms=drwxrwxr-x)
	I1001 20:41:57.648350   73835 main.go:141] libmachine: (auto-983557) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 20:41:57.648388   73835 main.go:141] libmachine: (auto-983557) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198
	I1001 20:41:57.648402   73835 main.go:141] libmachine: (auto-983557) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 20:41:57.648424   73835 main.go:141] libmachine: (auto-983557) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 20:41:57.648437   73835 main.go:141] libmachine: (auto-983557) DBG | Checking permissions on dir: /home/jenkins
	I1001 20:41:57.648444   73835 main.go:141] libmachine: (auto-983557) DBG | Checking permissions on dir: /home
	I1001 20:41:57.648458   73835 main.go:141] libmachine: (auto-983557) DBG | Skipping /home - not owner
	I1001 20:41:57.648468   73835 main.go:141] libmachine: (auto-983557) Creating domain...
	I1001 20:41:57.649821   73835 main.go:141] libmachine: (auto-983557) define libvirt domain using xml: 
	I1001 20:41:57.649846   73835 main.go:141] libmachine: (auto-983557) <domain type='kvm'>
	I1001 20:41:57.649860   73835 main.go:141] libmachine: (auto-983557)   <name>auto-983557</name>
	I1001 20:41:57.649867   73835 main.go:141] libmachine: (auto-983557)   <memory unit='MiB'>3072</memory>
	I1001 20:41:57.649906   73835 main.go:141] libmachine: (auto-983557)   <vcpu>2</vcpu>
	I1001 20:41:57.649916   73835 main.go:141] libmachine: (auto-983557)   <features>
	I1001 20:41:57.649933   73835 main.go:141] libmachine: (auto-983557)     <acpi/>
	I1001 20:41:57.649947   73835 main.go:141] libmachine: (auto-983557)     <apic/>
	I1001 20:41:57.649958   73835 main.go:141] libmachine: (auto-983557)     <pae/>
	I1001 20:41:57.649967   73835 main.go:141] libmachine: (auto-983557)     
	I1001 20:41:57.649976   73835 main.go:141] libmachine: (auto-983557)   </features>
	I1001 20:41:57.649987   73835 main.go:141] libmachine: (auto-983557)   <cpu mode='host-passthrough'>
	I1001 20:41:57.649997   73835 main.go:141] libmachine: (auto-983557)   
	I1001 20:41:57.650004   73835 main.go:141] libmachine: (auto-983557)   </cpu>
	I1001 20:41:57.650045   73835 main.go:141] libmachine: (auto-983557)   <os>
	I1001 20:41:57.650063   73835 main.go:141] libmachine: (auto-983557)     <type>hvm</type>
	I1001 20:41:57.650070   73835 main.go:141] libmachine: (auto-983557)     <boot dev='cdrom'/>
	I1001 20:41:57.650076   73835 main.go:141] libmachine: (auto-983557)     <boot dev='hd'/>
	I1001 20:41:57.650082   73835 main.go:141] libmachine: (auto-983557)     <bootmenu enable='no'/>
	I1001 20:41:57.650088   73835 main.go:141] libmachine: (auto-983557)   </os>
	I1001 20:41:57.650093   73835 main.go:141] libmachine: (auto-983557)   <devices>
	I1001 20:41:57.650099   73835 main.go:141] libmachine: (auto-983557)     <disk type='file' device='cdrom'>
	I1001 20:41:57.650119   73835 main.go:141] libmachine: (auto-983557)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/auto-983557/boot2docker.iso'/>
	I1001 20:41:57.650129   73835 main.go:141] libmachine: (auto-983557)       <target dev='hdc' bus='scsi'/>
	I1001 20:41:57.650156   73835 main.go:141] libmachine: (auto-983557)       <readonly/>
	I1001 20:41:57.650178   73835 main.go:141] libmachine: (auto-983557)     </disk>
	I1001 20:41:57.650190   73835 main.go:141] libmachine: (auto-983557)     <disk type='file' device='disk'>
	I1001 20:41:57.650214   73835 main.go:141] libmachine: (auto-983557)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 20:41:57.650227   73835 main.go:141] libmachine: (auto-983557)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/auto-983557/auto-983557.rawdisk'/>
	I1001 20:41:57.650236   73835 main.go:141] libmachine: (auto-983557)       <target dev='hda' bus='virtio'/>
	I1001 20:41:57.650247   73835 main.go:141] libmachine: (auto-983557)     </disk>
	I1001 20:41:57.650255   73835 main.go:141] libmachine: (auto-983557)     <interface type='network'>
	I1001 20:41:57.650265   73835 main.go:141] libmachine: (auto-983557)       <source network='mk-auto-983557'/>
	I1001 20:41:57.650274   73835 main.go:141] libmachine: (auto-983557)       <model type='virtio'/>
	I1001 20:41:57.650283   73835 main.go:141] libmachine: (auto-983557)     </interface>
	I1001 20:41:57.650291   73835 main.go:141] libmachine: (auto-983557)     <interface type='network'>
	I1001 20:41:57.650301   73835 main.go:141] libmachine: (auto-983557)       <source network='default'/>
	I1001 20:41:57.650308   73835 main.go:141] libmachine: (auto-983557)       <model type='virtio'/>
	I1001 20:41:57.650315   73835 main.go:141] libmachine: (auto-983557)     </interface>
	I1001 20:41:57.650321   73835 main.go:141] libmachine: (auto-983557)     <serial type='pty'>
	I1001 20:41:57.650331   73835 main.go:141] libmachine: (auto-983557)       <target port='0'/>
	I1001 20:41:57.650345   73835 main.go:141] libmachine: (auto-983557)     </serial>
	I1001 20:41:57.650362   73835 main.go:141] libmachine: (auto-983557)     <console type='pty'>
	I1001 20:41:57.650375   73835 main.go:141] libmachine: (auto-983557)       <target type='serial' port='0'/>
	I1001 20:41:57.650385   73835 main.go:141] libmachine: (auto-983557)     </console>
	I1001 20:41:57.650396   73835 main.go:141] libmachine: (auto-983557)     <rng model='virtio'>
	I1001 20:41:57.650408   73835 main.go:141] libmachine: (auto-983557)       <backend model='random'>/dev/random</backend>
	I1001 20:41:57.650427   73835 main.go:141] libmachine: (auto-983557)     </rng>
	I1001 20:41:57.650439   73835 main.go:141] libmachine: (auto-983557)     
	I1001 20:41:57.650447   73835 main.go:141] libmachine: (auto-983557)     
	I1001 20:41:57.650457   73835 main.go:141] libmachine: (auto-983557)   </devices>
	I1001 20:41:57.650468   73835 main.go:141] libmachine: (auto-983557) </domain>
	I1001 20:41:57.650478   73835 main.go:141] libmachine: (auto-983557) 
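With the domain XML above assembled (boot2docker ISO on a SCSI cdrom, the raw disk on virtio, one NIC on mk-auto-983557 and one on libvirt's default network), the driver defines and boots the domain. The sketch below does the equivalent through the virsh CLI via os/exec; it is a stand-in for illustration, since the kvm2 driver plugin talks to libvirt through Go bindings rather than shelling out, and the function name here is hypothetical.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// defineAndStart registers a domain from an XML file and boots it using the
// standard "virsh define" and "virsh start" subcommands. Hand-rolled sketch
// only; minikube's kvm2 driver uses libvirt's Go bindings instead of virsh.
func defineAndStart(domainName, xmlPath string) error {
	for _, args := range [][]string{
		{"define", xmlPath},
		{"start", domainName},
	} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("virsh %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	if len(os.Args) != 3 {
		fmt.Println("usage: defineandstart <domain-name> <domain.xml>")
		return
	}
	if err := defineAndStart(os.Args[1], os.Args[2]); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
}
```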
	I1001 20:41:57.654749   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:d5:c6:d4 in network default
	I1001 20:41:57.655683   73835 main.go:141] libmachine: (auto-983557) Ensuring networks are active...
	I1001 20:41:57.655704   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:41:57.656710   73835 main.go:141] libmachine: (auto-983557) Ensuring network default is active
	I1001 20:41:57.656973   73835 main.go:141] libmachine: (auto-983557) Ensuring network mk-auto-983557 is active
	I1001 20:41:57.657831   73835 main.go:141] libmachine: (auto-983557) Getting domain xml...
	I1001 20:41:57.658807   73835 main.go:141] libmachine: (auto-983557) Creating domain...
	I1001 20:41:59.004889   73835 main.go:141] libmachine: (auto-983557) Waiting to get IP...
	I1001 20:41:59.005716   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:41:59.006223   73835 main.go:141] libmachine: (auto-983557) DBG | unable to find current IP address of domain auto-983557 in network mk-auto-983557
	I1001 20:41:59.006253   73835 main.go:141] libmachine: (auto-983557) DBG | I1001 20:41:59.006178   73858 retry.go:31] will retry after 202.704688ms: waiting for machine to come up
	I1001 20:41:59.210527   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:41:59.210945   73835 main.go:141] libmachine: (auto-983557) DBG | unable to find current IP address of domain auto-983557 in network mk-auto-983557
	I1001 20:41:59.210991   73835 main.go:141] libmachine: (auto-983557) DBG | I1001 20:41:59.210905   73858 retry.go:31] will retry after 388.085015ms: waiting for machine to come up
	I1001 20:41:59.600298   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:41:59.600880   73835 main.go:141] libmachine: (auto-983557) DBG | unable to find current IP address of domain auto-983557 in network mk-auto-983557
	I1001 20:41:59.600907   73835 main.go:141] libmachine: (auto-983557) DBG | I1001 20:41:59.600842   73858 retry.go:31] will retry after 426.22305ms: waiting for machine to come up
	I1001 20:42:00.028425   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:00.028853   73835 main.go:141] libmachine: (auto-983557) DBG | unable to find current IP address of domain auto-983557 in network mk-auto-983557
	I1001 20:42:00.028884   73835 main.go:141] libmachine: (auto-983557) DBG | I1001 20:42:00.028829   73858 retry.go:31] will retry after 411.354538ms: waiting for machine to come up
	I1001 20:42:00.796437   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:00.796798   73835 main.go:141] libmachine: (auto-983557) DBG | unable to find current IP address of domain auto-983557 in network mk-auto-983557
	I1001 20:42:00.796819   73835 main.go:141] libmachine: (auto-983557) DBG | I1001 20:42:00.796763   73858 retry.go:31] will retry after 473.580637ms: waiting for machine to come up
	I1001 20:42:01.272268   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:01.272749   73835 main.go:141] libmachine: (auto-983557) DBG | unable to find current IP address of domain auto-983557 in network mk-auto-983557
	I1001 20:42:01.272800   73835 main.go:141] libmachine: (auto-983557) DBG | I1001 20:42:01.272703   73858 retry.go:31] will retry after 657.815379ms: waiting for machine to come up
	I1001 20:42:01.932132   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:01.932738   73835 main.go:141] libmachine: (auto-983557) DBG | unable to find current IP address of domain auto-983557 in network mk-auto-983557
	I1001 20:42:01.932765   73835 main.go:141] libmachine: (auto-983557) DBG | I1001 20:42:01.932705   73858 retry.go:31] will retry after 783.390468ms: waiting for machine to come up
	I1001 20:42:01.161711   74151 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 20:42:01.161778   74151 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 20:42:01.161788   74151 cache.go:56] Caching tarball of preloaded images
	I1001 20:42:01.161893   74151 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 20:42:01.161904   74151 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 20:42:01.162017   74151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kindnet-983557/config.json ...
	I1001 20:42:01.162048   74151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kindnet-983557/config.json: {Name:mk651eee1b5d2bf02225dde7aa4eb80b0275d468 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:42:01.162210   74151 start.go:360] acquireMachinesLock for kindnet-983557: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
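Interleaved with the VM creation, the kindnet-983557 profile confirms that a preloaded image tarball for v1.31.1 on cri-o already sits in the local cache, so the download is skipped. A tiny sketch of that existence check; the path layout is copied from this log and the helper name is hypothetical.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath builds the cache location checked in the log above for a cri-o
// preload of the given Kubernetes version. Directory layout and file name are
// taken from this log; the helper itself is hypothetical.
func preloadPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath("/home/jenkins/minikube-integration/19736-11198/.minikube", "v1.31.1")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("Found local preload:", p)
	} else {
		fmt.Println("no local preload, would download:", p)
	}
}
```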
	I1001 20:42:02.717393   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:02.718083   73835 main.go:141] libmachine: (auto-983557) DBG | unable to find current IP address of domain auto-983557 in network mk-auto-983557
	I1001 20:42:02.718107   73835 main.go:141] libmachine: (auto-983557) DBG | I1001 20:42:02.717979   73858 retry.go:31] will retry after 1.143951269s: waiting for machine to come up
	I1001 20:42:03.863370   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:03.863905   73835 main.go:141] libmachine: (auto-983557) DBG | unable to find current IP address of domain auto-983557 in network mk-auto-983557
	I1001 20:42:03.863921   73835 main.go:141] libmachine: (auto-983557) DBG | I1001 20:42:03.863863   73858 retry.go:31] will retry after 1.393916866s: waiting for machine to come up
	I1001 20:42:05.259349   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:05.259730   73835 main.go:141] libmachine: (auto-983557) DBG | unable to find current IP address of domain auto-983557 in network mk-auto-983557
	I1001 20:42:05.259755   73835 main.go:141] libmachine: (auto-983557) DBG | I1001 20:42:05.259680   73858 retry.go:31] will retry after 2.002642266s: waiting for machine to come up
	I1001 20:42:07.263885   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:07.264350   73835 main.go:141] libmachine: (auto-983557) DBG | unable to find current IP address of domain auto-983557 in network mk-auto-983557
	I1001 20:42:07.264399   73835 main.go:141] libmachine: (auto-983557) DBG | I1001 20:42:07.264315   73858 retry.go:31] will retry after 2.791035994s: waiting for machine to come up
	I1001 20:42:10.057598   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:10.058146   73835 main.go:141] libmachine: (auto-983557) DBG | unable to find current IP address of domain auto-983557 in network mk-auto-983557
	I1001 20:42:10.058165   73835 main.go:141] libmachine: (auto-983557) DBG | I1001 20:42:10.058112   73858 retry.go:31] will retry after 2.23245543s: waiting for machine to come up
	I1001 20:42:12.292061   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:12.292626   73835 main.go:141] libmachine: (auto-983557) DBG | unable to find current IP address of domain auto-983557 in network mk-auto-983557
	I1001 20:42:12.292653   73835 main.go:141] libmachine: (auto-983557) DBG | I1001 20:42:12.292578   73858 retry.go:31] will retry after 4.539799078s: waiting for machine to come up
	I1001 20:42:16.837345   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:16.837829   73835 main.go:141] libmachine: (auto-983557) Found IP for machine: 192.168.72.182
	I1001 20:42:16.837852   73835 main.go:141] libmachine: (auto-983557) Reserving static IP address...
	I1001 20:42:16.837866   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has current primary IP address 192.168.72.182 and MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:16.838258   73835 main.go:141] libmachine: (auto-983557) DBG | unable to find host DHCP lease matching {name: "auto-983557", mac: "52:54:00:8c:5e:b4", ip: "192.168.72.182"} in network mk-auto-983557
	I1001 20:42:16.924531   73835 main.go:141] libmachine: (auto-983557) Reserved static IP address: 192.168.72.182
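The "will retry after ..." lines above come from a simple poll loop: the driver repeatedly asks libvirt for the domain's DHCP lease and sleeps for a growing delay until an address appears. A minimal sketch of that pattern, with a fake lookup and a hand-picked delay schedule standing in for the real retry helper:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor retries fn with the caller-supplied delays until it succeeds or the
// delays run out, mirroring the "will retry after ..." lines above. The helper
// and its delay schedule are illustrative, not minikube's retry implementation.
func waitFor(fn func() error, delays []time.Duration) error {
	var lastErr error
	for _, d := range delays {
		if lastErr = fn(); lastErr == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", d, lastErr)
		time.Sleep(d)
	}
	return fmt.Errorf("gave up waiting: %w", lastErr)
}

func main() {
	attempts := 0
	lookupIP := func() error {
		attempts++
		if attempts < 4 {
			return errors.New("unable to find current IP address of domain")
		}
		return nil // pretend the DHCP lease finally showed up
	}
	delays := []time.Duration{
		200 * time.Millisecond, 400 * time.Millisecond,
		800 * time.Millisecond, 2 * time.Second,
	}
	if err := waitFor(lookupIP, delays); err != nil {
		panic(err)
	}
	fmt.Println("Found IP for machine")
}
```

The real loop in the log appears to grow and jitter its delays per attempt (202ms, 388ms, 426ms, ...), which the fixed schedule here only approximates.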
	I1001 20:42:16.924559   73835 main.go:141] libmachine: (auto-983557) DBG | Getting to WaitForSSH function...
	I1001 20:42:16.924567   73835 main.go:141] libmachine: (auto-983557) Waiting for SSH to be available...
	I1001 20:42:16.927771   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:16.928239   73835 main.go:141] libmachine: (auto-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:5e:b4", ip: ""} in network mk-auto-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:42:11 +0000 UTC Type:0 Mac:52:54:00:8c:5e:b4 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8c:5e:b4}
	I1001 20:42:16.928260   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined IP address 192.168.72.182 and MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:16.928479   73835 main.go:141] libmachine: (auto-983557) DBG | Using SSH client type: external
	I1001 20:42:16.928509   73835 main.go:141] libmachine: (auto-983557) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/auto-983557/id_rsa (-rw-------)
	I1001 20:42:16.928536   73835 main.go:141] libmachine: (auto-983557) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/auto-983557/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 20:42:16.928549   73835 main.go:141] libmachine: (auto-983557) DBG | About to run SSH command:
	I1001 20:42:16.928560   73835 main.go:141] libmachine: (auto-983557) DBG | exit 0
	I1001 20:42:17.057516   73835 main.go:141] libmachine: (auto-983557) DBG | SSH cmd err, output: <nil>: 
	I1001 20:42:17.057784   73835 main.go:141] libmachine: (auto-983557) KVM machine creation complete!
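Once the lease is reserved, the driver confirms sshd is reachable by running `exit 0` through the external ssh client with the options shown above (no known-hosts checking, key-only auth, short connect timeout). A sketch of assembling that probe with os/exec; the user, address, and key path are copied from this log purely for illustration.

```go
package main

import (
	"fmt"
	"os/exec"
)

// probeSSH runs "exit 0" through the external ssh client with the options seen
// in the log above; a zero exit status means the guest's sshd accepts the key.
func probeSSH(user, addr, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no",
		"-o", "ControlPath=none",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, addr),
		"exit 0",
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh probe failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Values below are the ones that appear in this log; illustrative only.
	key := "/home/jenkins/minikube-integration/19736-11198/.minikube/machines/auto-983557/id_rsa"
	if err := probeSSH("docker", "192.168.72.182", key); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("SSH is available")
}
```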
	I1001 20:42:17.058142   73835 main.go:141] libmachine: (auto-983557) Calling .GetConfigRaw
	I1001 20:42:17.058685   73835 main.go:141] libmachine: (auto-983557) Calling .DriverName
	I1001 20:42:17.058919   73835 main.go:141] libmachine: (auto-983557) Calling .DriverName
	I1001 20:42:17.059154   73835 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 20:42:17.059169   73835 main.go:141] libmachine: (auto-983557) Calling .GetState
	I1001 20:42:17.061169   73835 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 20:42:17.061188   73835 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 20:42:17.061194   73835 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 20:42:17.061203   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHHostname
	I1001 20:42:17.063894   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:17.064300   73835 main.go:141] libmachine: (auto-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:5e:b4", ip: ""} in network mk-auto-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:42:11 +0000 UTC Type:0 Mac:52:54:00:8c:5e:b4 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:auto-983557 Clientid:01:52:54:00:8c:5e:b4}
	I1001 20:42:17.064343   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined IP address 192.168.72.182 and MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:17.064485   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHPort
	I1001 20:42:17.064672   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHKeyPath
	I1001 20:42:17.064856   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHKeyPath
	I1001 20:42:17.065021   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHUsername
	I1001 20:42:17.065204   73835 main.go:141] libmachine: Using SSH client type: native
	I1001 20:42:17.065384   73835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I1001 20:42:17.065395   73835 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 20:42:18.332910   74151 start.go:364] duration metric: took 17.170659532s to acquireMachinesLock for "kindnet-983557"
	I1001 20:42:18.332987   74151 start.go:93] Provisioning new machine with config: &{Name:kindnet-983557 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-983557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 20:42:18.333096   74151 start.go:125] createHost starting for "" (driver="kvm2")
	I1001 20:42:17.183554   73835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 20:42:17.183585   73835 main.go:141] libmachine: Detecting the provisioner...
	I1001 20:42:17.183597   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHHostname
	I1001 20:42:17.186647   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:17.187068   73835 main.go:141] libmachine: (auto-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:5e:b4", ip: ""} in network mk-auto-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:42:11 +0000 UTC Type:0 Mac:52:54:00:8c:5e:b4 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:auto-983557 Clientid:01:52:54:00:8c:5e:b4}
	I1001 20:42:17.187099   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined IP address 192.168.72.182 and MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:17.187260   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHPort
	I1001 20:42:17.187453   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHKeyPath
	I1001 20:42:17.187615   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHKeyPath
	I1001 20:42:17.187754   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHUsername
	I1001 20:42:17.187921   73835 main.go:141] libmachine: Using SSH client type: native
	I1001 20:42:17.188084   73835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I1001 20:42:17.188096   73835 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 20:42:17.308956   73835 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 20:42:17.309006   73835 main.go:141] libmachine: found compatible host: buildroot
	I1001 20:42:17.309013   73835 main.go:141] libmachine: Provisioning with buildroot...
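Provisioner detection is just `cat /etc/os-release` over SSH plus a match on the ID field; the Buildroot 2023.02.9 guest above is what the minikube ISO reports. A small, hypothetical parser for that KEY=VALUE format:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease splits KEY=VALUE lines from /etc/os-release into a map,
// trimming surrounding quotes, which is enough to recognise the Buildroot
// guest seen above. A minimal sketch, not minikube's actual detection code.
func parseOSRelease(contents string) map[string]string {
	info := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		info[k] = strings.Trim(v, `"`)
	}
	return info
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(sample)
	fmt.Println("found compatible host:", info["ID"], info["VERSION_ID"])
}
```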
	I1001 20:42:17.309020   73835 main.go:141] libmachine: (auto-983557) Calling .GetMachineName
	I1001 20:42:17.309260   73835 buildroot.go:166] provisioning hostname "auto-983557"
	I1001 20:42:17.309292   73835 main.go:141] libmachine: (auto-983557) Calling .GetMachineName
	I1001 20:42:17.309520   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHHostname
	I1001 20:42:17.312426   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:17.312962   73835 main.go:141] libmachine: (auto-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:5e:b4", ip: ""} in network mk-auto-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:42:11 +0000 UTC Type:0 Mac:52:54:00:8c:5e:b4 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:auto-983557 Clientid:01:52:54:00:8c:5e:b4}
	I1001 20:42:17.312987   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined IP address 192.168.72.182 and MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:17.313113   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHPort
	I1001 20:42:17.313304   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHKeyPath
	I1001 20:42:17.313489   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHKeyPath
	I1001 20:42:17.313649   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHUsername
	I1001 20:42:17.313826   73835 main.go:141] libmachine: Using SSH client type: native
	I1001 20:42:17.313984   73835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I1001 20:42:17.313996   73835 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-983557 && echo "auto-983557" | sudo tee /etc/hostname
	I1001 20:42:17.448015   73835 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-983557
	
	I1001 20:42:17.448043   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHHostname
	I1001 20:42:17.450946   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:17.451357   73835 main.go:141] libmachine: (auto-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:5e:b4", ip: ""} in network mk-auto-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:42:11 +0000 UTC Type:0 Mac:52:54:00:8c:5e:b4 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:auto-983557 Clientid:01:52:54:00:8c:5e:b4}
	I1001 20:42:17.451383   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined IP address 192.168.72.182 and MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:17.451589   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHPort
	I1001 20:42:17.451780   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHKeyPath
	I1001 20:42:17.451959   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHKeyPath
	I1001 20:42:17.452119   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHUsername
	I1001 20:42:17.452266   73835 main.go:141] libmachine: Using SSH client type: native
	I1001 20:42:17.452483   73835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I1001 20:42:17.452499   73835 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-983557' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-983557/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-983557' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 20:42:17.573473   73835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
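Hostname provisioning is two remote commands: persist the name to /etc/hostname, then make sure /etc/hosts maps 127.0.1.1 to it (the shell block above). The sketch below renders an equivalent command string in Go; it folds both steps into one snippet for brevity, so treat it as an illustration rather than a copy of the provisioner's code.

```go
package main

import "fmt"

// setHostnameCmd reproduces the shell sequence from the log: set the kernel
// hostname, persist it to /etc/hostname, and ensure /etc/hosts has a
// 127.0.1.1 entry for the new name. Sketch only; quoting mirrors the log.
func setHostnameCmd(name string) string {
	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
}

func main() {
	fmt.Println(setHostnameCmd("auto-983557"))
}
```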
	I1001 20:42:17.573505   73835 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 20:42:17.573526   73835 buildroot.go:174] setting up certificates
	I1001 20:42:17.573539   73835 provision.go:84] configureAuth start
	I1001 20:42:17.573551   73835 main.go:141] libmachine: (auto-983557) Calling .GetMachineName
	I1001 20:42:17.573859   73835 main.go:141] libmachine: (auto-983557) Calling .GetIP
	I1001 20:42:17.576647   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:17.577055   73835 main.go:141] libmachine: (auto-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:5e:b4", ip: ""} in network mk-auto-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:42:11 +0000 UTC Type:0 Mac:52:54:00:8c:5e:b4 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:auto-983557 Clientid:01:52:54:00:8c:5e:b4}
	I1001 20:42:17.577083   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined IP address 192.168.72.182 and MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:17.577216   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHHostname
	I1001 20:42:17.579280   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:17.579558   73835 main.go:141] libmachine: (auto-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:5e:b4", ip: ""} in network mk-auto-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:42:11 +0000 UTC Type:0 Mac:52:54:00:8c:5e:b4 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:auto-983557 Clientid:01:52:54:00:8c:5e:b4}
	I1001 20:42:17.579592   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined IP address 192.168.72.182 and MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:17.579708   73835 provision.go:143] copyHostCerts
	I1001 20:42:17.579767   73835 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 20:42:17.579778   73835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 20:42:17.579843   73835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 20:42:17.579940   73835 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 20:42:17.579952   73835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 20:42:17.579978   73835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 20:42:17.580030   73835 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 20:42:17.580037   73835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 20:42:17.580067   73835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 20:42:17.580135   73835 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.auto-983557 san=[127.0.0.1 192.168.72.182 auto-983557 localhost minikube]
	I1001 20:42:17.650714   73835 provision.go:177] copyRemoteCerts
	I1001 20:42:17.650775   73835 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 20:42:17.650797   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHHostname
	I1001 20:42:17.654346   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:17.654709   73835 main.go:141] libmachine: (auto-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:5e:b4", ip: ""} in network mk-auto-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:42:11 +0000 UTC Type:0 Mac:52:54:00:8c:5e:b4 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:auto-983557 Clientid:01:52:54:00:8c:5e:b4}
	I1001 20:42:17.654766   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined IP address 192.168.72.182 and MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:17.654910   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHPort
	I1001 20:42:17.655103   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHKeyPath
	I1001 20:42:17.655253   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHUsername
	I1001 20:42:17.655400   73835 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/auto-983557/id_rsa Username:docker}
	I1001 20:42:17.743998   73835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 20:42:17.770051   73835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1001 20:42:17.801349   73835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 20:42:17.825094   73835 provision.go:87] duration metric: took 251.535587ms to configureAuth
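configureAuth above copies the host CA material into place and then generates a server certificate whose SANs cover 127.0.0.1, the machine IP, the machine name, localhost, and minikube. When building such a certificate with crypto/x509, that SAN list has to be split into IP and DNS entries; a minimal sketch of just that step (the helper is hypothetical):

```go
package main

import (
	"crypto/x509"
	"fmt"
	"net"
)

// splitSANs sorts the SAN list from the log (san=[127.0.0.1 192.168.72.182
// auto-983557 localhost minikube]) into the IPAddresses and DNSNames fields
// an x509.Certificate template expects. Illustrative sketch only.
func splitSANs(sans []string) (ips []net.IP, dns []string) {
	for _, s := range sans {
		if ip := net.ParseIP(s); ip != nil {
			ips = append(ips, ip)
			continue
		}
		dns = append(dns, s)
	}
	return ips, dns
}

func main() {
	ips, dns := splitSANs([]string{"127.0.0.1", "192.168.72.182", "auto-983557", "localhost", "minikube"})
	tmpl := x509.Certificate{IPAddresses: ips, DNSNames: dns}
	fmt.Println("IP SANs:", tmpl.IPAddresses, "DNS SANs:", tmpl.DNSNames)
}
```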
	I1001 20:42:17.825130   73835 buildroot.go:189] setting minikube options for container-runtime
	I1001 20:42:17.825297   73835 config.go:182] Loaded profile config "auto-983557": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:42:17.825376   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHHostname
	I1001 20:42:17.827930   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:17.828296   73835 main.go:141] libmachine: (auto-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:5e:b4", ip: ""} in network mk-auto-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:42:11 +0000 UTC Type:0 Mac:52:54:00:8c:5e:b4 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:auto-983557 Clientid:01:52:54:00:8c:5e:b4}
	I1001 20:42:17.828326   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined IP address 192.168.72.182 and MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:17.828496   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHPort
	I1001 20:42:17.828749   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHKeyPath
	I1001 20:42:17.828957   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHKeyPath
	I1001 20:42:17.829096   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHUsername
	I1001 20:42:17.829221   73835 main.go:141] libmachine: Using SSH client type: native
	I1001 20:42:17.829408   73835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I1001 20:42:17.829429   73835 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 20:42:18.069270   73835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 20:42:18.069302   73835 main.go:141] libmachine: Checking connection to Docker...
	I1001 20:42:18.069311   73835 main.go:141] libmachine: (auto-983557) Calling .GetURL
	I1001 20:42:18.070698   73835 main.go:141] libmachine: (auto-983557) DBG | Using libvirt version 6000000
	I1001 20:42:18.073419   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:18.073853   73835 main.go:141] libmachine: (auto-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:5e:b4", ip: ""} in network mk-auto-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:42:11 +0000 UTC Type:0 Mac:52:54:00:8c:5e:b4 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:auto-983557 Clientid:01:52:54:00:8c:5e:b4}
	I1001 20:42:18.073881   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined IP address 192.168.72.182 and MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:18.074056   73835 main.go:141] libmachine: Docker is up and running!
	I1001 20:42:18.074074   73835 main.go:141] libmachine: Reticulating splines...
	I1001 20:42:18.074081   73835 client.go:171] duration metric: took 20.850275434s to LocalClient.Create
	I1001 20:42:18.074100   73835 start.go:167] duration metric: took 20.850329952s to libmachine.API.Create "auto-983557"
	I1001 20:42:18.074106   73835 start.go:293] postStartSetup for "auto-983557" (driver="kvm2")
	I1001 20:42:18.074115   73835 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 20:42:18.074131   73835 main.go:141] libmachine: (auto-983557) Calling .DriverName
	I1001 20:42:18.074357   73835 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 20:42:18.074380   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHHostname
	I1001 20:42:18.076663   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:18.077114   73835 main.go:141] libmachine: (auto-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:5e:b4", ip: ""} in network mk-auto-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:42:11 +0000 UTC Type:0 Mac:52:54:00:8c:5e:b4 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:auto-983557 Clientid:01:52:54:00:8c:5e:b4}
	I1001 20:42:18.077137   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined IP address 192.168.72.182 and MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:18.077325   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHPort
	I1001 20:42:18.077499   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHKeyPath
	I1001 20:42:18.077655   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHUsername
	I1001 20:42:18.077787   73835 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/auto-983557/id_rsa Username:docker}
	I1001 20:42:18.167943   73835 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 20:42:18.172034   73835 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 20:42:18.172060   73835 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 20:42:18.172113   73835 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 20:42:18.172190   73835 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 20:42:18.172276   73835 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 20:42:18.182086   73835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 20:42:18.207458   73835 start.go:296] duration metric: took 133.340432ms for postStartSetup
	I1001 20:42:18.207521   73835 main.go:141] libmachine: (auto-983557) Calling .GetConfigRaw
	I1001 20:42:18.208125   73835 main.go:141] libmachine: (auto-983557) Calling .GetIP
	I1001 20:42:18.211911   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:18.212280   73835 main.go:141] libmachine: (auto-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:5e:b4", ip: ""} in network mk-auto-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:42:11 +0000 UTC Type:0 Mac:52:54:00:8c:5e:b4 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:auto-983557 Clientid:01:52:54:00:8c:5e:b4}
	I1001 20:42:18.212316   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined IP address 192.168.72.182 and MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:18.212550   73835 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557/config.json ...
	I1001 20:42:18.212742   73835 start.go:128] duration metric: took 21.008003448s to createHost
	I1001 20:42:18.212770   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHHostname
	I1001 20:42:18.215045   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:18.215379   73835 main.go:141] libmachine: (auto-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:5e:b4", ip: ""} in network mk-auto-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:42:11 +0000 UTC Type:0 Mac:52:54:00:8c:5e:b4 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:auto-983557 Clientid:01:52:54:00:8c:5e:b4}
	I1001 20:42:18.215411   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined IP address 192.168.72.182 and MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:18.215600   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHPort
	I1001 20:42:18.215813   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHKeyPath
	I1001 20:42:18.215989   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHKeyPath
	I1001 20:42:18.216186   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHUsername
	I1001 20:42:18.216439   73835 main.go:141] libmachine: Using SSH client type: native
	I1001 20:42:18.216641   73835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I1001 20:42:18.216677   73835 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 20:42:18.332755   73835 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727815338.305682707
	
	I1001 20:42:18.332775   73835 fix.go:216] guest clock: 1727815338.305682707
	I1001 20:42:18.332783   73835 fix.go:229] Guest: 2024-10-01 20:42:18.305682707 +0000 UTC Remote: 2024-10-01 20:42:18.212759467 +0000 UTC m=+21.123496138 (delta=92.92324ms)
	I1001 20:42:18.332808   73835 fix.go:200] guest clock delta is within tolerance: 92.92324ms
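The fix.go lines above read `date +%s.%N` inside the guest and compare it to the host clock, accepting the machine when the skew is small (92.9ms here). A toy reproduction of that comparison using the timestamps from this log; the 5s tolerance is an assumption for the sketch, not minikube's documented threshold.

```go
package main

import (
	"fmt"
	"time"
)

// clockDelta compares the guest's "date +%s.%N" reading with the host time and
// reports whether the absolute skew is within tolerance, as fix.go does above.
// The 5s tolerance below is an assumption for this sketch.
func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	d := host.Sub(guest)
	if d < 0 {
		d = -d
	}
	return d, d <= tolerance
}

func main() {
	// Timestamps copied from the log: guest 1727815338.305682707 vs the
	// host's 2024-10-01 20:42:18.212759467 UTC.
	guest := time.Unix(1727815338, 305682707).UTC()
	host := time.Date(2024, 10, 1, 20, 42, 18, 212759467, time.UTC)
	d, ok := clockDelta(guest, host, 5*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
}
```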
	I1001 20:42:18.332815   73835 start.go:83] releasing machines lock for "auto-983557", held for 21.12815158s
	I1001 20:42:18.332841   73835 main.go:141] libmachine: (auto-983557) Calling .DriverName
	I1001 20:42:18.333084   73835 main.go:141] libmachine: (auto-983557) Calling .GetIP
	I1001 20:42:18.336067   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:18.336481   73835 main.go:141] libmachine: (auto-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:5e:b4", ip: ""} in network mk-auto-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:42:11 +0000 UTC Type:0 Mac:52:54:00:8c:5e:b4 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:auto-983557 Clientid:01:52:54:00:8c:5e:b4}
	I1001 20:42:18.336507   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined IP address 192.168.72.182 and MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:18.336691   73835 main.go:141] libmachine: (auto-983557) Calling .DriverName
	I1001 20:42:18.337188   73835 main.go:141] libmachine: (auto-983557) Calling .DriverName
	I1001 20:42:18.337365   73835 main.go:141] libmachine: (auto-983557) Calling .DriverName
	I1001 20:42:18.337419   73835 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 20:42:18.337460   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHHostname
	I1001 20:42:18.337591   73835 ssh_runner.go:195] Run: cat /version.json
	I1001 20:42:18.337615   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHHostname
	I1001 20:42:18.340218   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:18.340459   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:18.340539   73835 main.go:141] libmachine: (auto-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:5e:b4", ip: ""} in network mk-auto-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:42:11 +0000 UTC Type:0 Mac:52:54:00:8c:5e:b4 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:auto-983557 Clientid:01:52:54:00:8c:5e:b4}
	I1001 20:42:18.340576   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined IP address 192.168.72.182 and MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:18.340733   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHPort
	I1001 20:42:18.340895   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHKeyPath
	I1001 20:42:18.340949   73835 main.go:141] libmachine: (auto-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:5e:b4", ip: ""} in network mk-auto-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:42:11 +0000 UTC Type:0 Mac:52:54:00:8c:5e:b4 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:auto-983557 Clientid:01:52:54:00:8c:5e:b4}
	I1001 20:42:18.340974   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined IP address 192.168.72.182 and MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:18.341084   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHUsername
	I1001 20:42:18.341157   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHPort
	I1001 20:42:18.341225   73835 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/auto-983557/id_rsa Username:docker}
	I1001 20:42:18.341317   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHKeyPath
	I1001 20:42:18.341452   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHUsername
	I1001 20:42:18.341601   73835 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/auto-983557/id_rsa Username:docker}
	I1001 20:42:18.429626   73835 ssh_runner.go:195] Run: systemctl --version
	I1001 20:42:18.469774   73835 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 20:42:18.628750   73835 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 20:42:18.634507   73835 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 20:42:18.634581   73835 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 20:42:18.650902   73835 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 20:42:18.650927   73835 start.go:495] detecting cgroup driver to use...
	I1001 20:42:18.650989   73835 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 20:42:18.667677   73835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 20:42:18.682411   73835 docker.go:217] disabling cri-docker service (if available) ...
	I1001 20:42:18.682479   73835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 20:42:18.697710   73835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 20:42:18.712796   73835 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 20:42:18.833841   73835 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 20:42:19.000772   73835 docker.go:233] disabling docker service ...
	I1001 20:42:19.000846   73835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 20:42:19.028549   73835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 20:42:19.043492   73835 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 20:42:19.172661   73835 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 20:42:19.303042   73835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 20:42:19.319942   73835 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 20:42:19.338348   73835 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 20:42:19.338428   73835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:42:19.348814   73835 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 20:42:19.348895   73835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:42:19.359405   73835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:42:19.372023   73835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:42:19.383028   73835 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 20:42:19.394175   73835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:42:19.405417   73835 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:42:19.424703   73835 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:42:19.435193   73835 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 20:42:19.445350   73835 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 20:42:19.445419   73835 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 20:42:19.460472   73835 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 20:42:19.470815   73835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:42:19.612223   73835 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 20:42:19.715800   73835 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 20:42:19.715879   73835 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 20:42:19.721204   73835 start.go:563] Will wait 60s for crictl version
	I1001 20:42:19.721269   73835 ssh_runner.go:195] Run: which crictl
	I1001 20:42:19.726116   73835 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 20:42:19.767736   73835 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 20:42:19.767830   73835 ssh_runner.go:195] Run: crio --version
	I1001 20:42:19.799023   73835 ssh_runner.go:195] Run: crio --version
	I1001 20:42:19.833359   73835 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
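	The block above shows minikube pointing CRI-O at the registry.k8s.io/pause:3.10 pause image and the cgroupfs cgroup manager by rewriting /etc/crio/crio.conf.d/02-crio.conf over SSH and then restarting crio. As a rough illustration only (a sketch, not minikube's actual code), the Go program below assembles equivalent shell commands and prints them instead of executing anything:

```go
// Illustrative sketch only: builds the kind of shell commands the log above
// shows being run over SSH to reconfigure CRI-O, and prints them (dry run).
package main

import "fmt"

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	pauseImage := "registry.k8s.io/pause:3.10" // value taken from the log
	cgroupMgr := "cgroupfs"                    // value taken from the log

	cmds := []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupMgr, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, c := range cmds {
		fmt.Println(c)
	}
}
```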
	I1001 20:42:18.335076   74151 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 20:42:18.335276   74151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:42:18.335327   74151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:42:18.352298   74151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38533
	I1001 20:42:18.352801   74151 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:42:18.353398   74151 main.go:141] libmachine: Using API Version  1
	I1001 20:42:18.353451   74151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:42:18.353775   74151 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:42:18.353954   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetMachineName
	I1001 20:42:18.354122   74151 main.go:141] libmachine: (kindnet-983557) Calling .DriverName
	I1001 20:42:18.354288   74151 start.go:159] libmachine.API.Create for "kindnet-983557" (driver="kvm2")
	I1001 20:42:18.354320   74151 client.go:168] LocalClient.Create starting
	I1001 20:42:18.354368   74151 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem
	I1001 20:42:18.354406   74151 main.go:141] libmachine: Decoding PEM data...
	I1001 20:42:18.354434   74151 main.go:141] libmachine: Parsing certificate...
	I1001 20:42:18.354499   74151 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem
	I1001 20:42:18.354534   74151 main.go:141] libmachine: Decoding PEM data...
	I1001 20:42:18.354554   74151 main.go:141] libmachine: Parsing certificate...
	I1001 20:42:18.354588   74151 main.go:141] libmachine: Running pre-create checks...
	I1001 20:42:18.354600   74151 main.go:141] libmachine: (kindnet-983557) Calling .PreCreateCheck
	I1001 20:42:18.354964   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetConfigRaw
	I1001 20:42:18.355311   74151 main.go:141] libmachine: Creating machine...
	I1001 20:42:18.355325   74151 main.go:141] libmachine: (kindnet-983557) Calling .Create
	I1001 20:42:18.355474   74151 main.go:141] libmachine: (kindnet-983557) Creating KVM machine...
	I1001 20:42:18.356931   74151 main.go:141] libmachine: (kindnet-983557) DBG | found existing default KVM network
	I1001 20:42:18.358282   74151 main.go:141] libmachine: (kindnet-983557) DBG | I1001 20:42:18.358114   74311 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:38:12:0e} reservation:<nil>}
	I1001 20:42:18.359058   74151 main.go:141] libmachine: (kindnet-983557) DBG | I1001 20:42:18.358984   74311 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:59:db:fe} reservation:<nil>}
	I1001 20:42:18.360108   74151 main.go:141] libmachine: (kindnet-983557) DBG | I1001 20:42:18.360014   74311 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000390760}
	I1001 20:42:18.360128   74151 main.go:141] libmachine: (kindnet-983557) DBG | created network xml: 
	I1001 20:42:18.360138   74151 main.go:141] libmachine: (kindnet-983557) DBG | <network>
	I1001 20:42:18.360145   74151 main.go:141] libmachine: (kindnet-983557) DBG |   <name>mk-kindnet-983557</name>
	I1001 20:42:18.360157   74151 main.go:141] libmachine: (kindnet-983557) DBG |   <dns enable='no'/>
	I1001 20:42:18.360165   74151 main.go:141] libmachine: (kindnet-983557) DBG |   
	I1001 20:42:18.360173   74151 main.go:141] libmachine: (kindnet-983557) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1001 20:42:18.360183   74151 main.go:141] libmachine: (kindnet-983557) DBG |     <dhcp>
	I1001 20:42:18.360193   74151 main.go:141] libmachine: (kindnet-983557) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1001 20:42:18.360208   74151 main.go:141] libmachine: (kindnet-983557) DBG |     </dhcp>
	I1001 20:42:18.360219   74151 main.go:141] libmachine: (kindnet-983557) DBG |   </ip>
	I1001 20:42:18.360227   74151 main.go:141] libmachine: (kindnet-983557) DBG |   
	I1001 20:42:18.360233   74151 main.go:141] libmachine: (kindnet-983557) DBG | </network>
	I1001 20:42:18.360245   74151 main.go:141] libmachine: (kindnet-983557) DBG | 
	I1001 20:42:18.365842   74151 main.go:141] libmachine: (kindnet-983557) DBG | trying to create private KVM network mk-kindnet-983557 192.168.61.0/24...
	I1001 20:42:18.447636   74151 main.go:141] libmachine: (kindnet-983557) DBG | private KVM network mk-kindnet-983557 192.168.61.0/24 created
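	The network.go lines above pick the first free 192.168.x.0/24 range after skipping subnets already used by other libvirt networks. A minimal sketch of that idea follows; the +11 step per attempt is only inferred from the 39 → 50 → 61 progression in the log, and none of this is minikube's actual implementation:

```go
// Minimal sketch: choose the first free 192.168.x.0/24 subnet, skipping the
// ranges already taken by existing libvirt networks (values from the log).
package main

import "fmt"

func main() {
	taken := map[string]bool{
		"192.168.39.0/24": true,
		"192.168.50.0/24": true,
	}
	for octet := 39; octet <= 254; octet += 11 { // step inferred from the log
		candidate := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[candidate] {
			fmt.Println("using free private subnet", candidate)
			return
		}
		fmt.Println("skipping subnet", candidate, "that is taken")
	}
	fmt.Println("no free subnet found")
}
```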
	I1001 20:42:18.447666   74151 main.go:141] libmachine: (kindnet-983557) Setting up store path in /home/jenkins/minikube-integration/19736-11198/.minikube/machines/kindnet-983557 ...
	I1001 20:42:18.447681   74151 main.go:141] libmachine: (kindnet-983557) DBG | I1001 20:42:18.447606   74311 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 20:42:18.447710   74151 main.go:141] libmachine: (kindnet-983557) Building disk image from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 20:42:18.447843   74151 main.go:141] libmachine: (kindnet-983557) Downloading /home/jenkins/minikube-integration/19736-11198/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 20:42:18.726370   74151 main.go:141] libmachine: (kindnet-983557) DBG | I1001 20:42:18.726197   74311 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/kindnet-983557/id_rsa...
	I1001 20:42:18.867053   74151 main.go:141] libmachine: (kindnet-983557) DBG | I1001 20:42:18.866890   74311 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/kindnet-983557/kindnet-983557.rawdisk...
	I1001 20:42:18.867091   74151 main.go:141] libmachine: (kindnet-983557) DBG | Writing magic tar header
	I1001 20:42:18.867112   74151 main.go:141] libmachine: (kindnet-983557) DBG | Writing SSH key tar header
	I1001 20:42:18.867126   74151 main.go:141] libmachine: (kindnet-983557) DBG | I1001 20:42:18.867007   74311 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/kindnet-983557 ...
	I1001 20:42:18.867145   74151 main.go:141] libmachine: (kindnet-983557) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/kindnet-983557
	I1001 20:42:18.867159   74151 main.go:141] libmachine: (kindnet-983557) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines
	I1001 20:42:18.867172   74151 main.go:141] libmachine: (kindnet-983557) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/kindnet-983557 (perms=drwx------)
	I1001 20:42:18.867185   74151 main.go:141] libmachine: (kindnet-983557) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines (perms=drwxr-xr-x)
	I1001 20:42:18.867198   74151 main.go:141] libmachine: (kindnet-983557) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 20:42:18.867213   74151 main.go:141] libmachine: (kindnet-983557) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198
	I1001 20:42:18.867224   74151 main.go:141] libmachine: (kindnet-983557) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 20:42:18.867238   74151 main.go:141] libmachine: (kindnet-983557) DBG | Checking permissions on dir: /home/jenkins
	I1001 20:42:18.867249   74151 main.go:141] libmachine: (kindnet-983557) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube (perms=drwxr-xr-x)
	I1001 20:42:18.867260   74151 main.go:141] libmachine: (kindnet-983557) DBG | Checking permissions on dir: /home
	I1001 20:42:18.867270   74151 main.go:141] libmachine: (kindnet-983557) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198 (perms=drwxrwxr-x)
	I1001 20:42:18.867278   74151 main.go:141] libmachine: (kindnet-983557) DBG | Skipping /home - not owner
	I1001 20:42:18.867294   74151 main.go:141] libmachine: (kindnet-983557) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 20:42:18.867306   74151 main.go:141] libmachine: (kindnet-983557) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 20:42:18.867319   74151 main.go:141] libmachine: (kindnet-983557) Creating domain...
	I1001 20:42:18.868540   74151 main.go:141] libmachine: (kindnet-983557) define libvirt domain using xml: 
	I1001 20:42:18.868564   74151 main.go:141] libmachine: (kindnet-983557) <domain type='kvm'>
	I1001 20:42:18.868574   74151 main.go:141] libmachine: (kindnet-983557)   <name>kindnet-983557</name>
	I1001 20:42:18.868582   74151 main.go:141] libmachine: (kindnet-983557)   <memory unit='MiB'>3072</memory>
	I1001 20:42:18.868595   74151 main.go:141] libmachine: (kindnet-983557)   <vcpu>2</vcpu>
	I1001 20:42:18.868602   74151 main.go:141] libmachine: (kindnet-983557)   <features>
	I1001 20:42:18.868611   74151 main.go:141] libmachine: (kindnet-983557)     <acpi/>
	I1001 20:42:18.868622   74151 main.go:141] libmachine: (kindnet-983557)     <apic/>
	I1001 20:42:18.868630   74151 main.go:141] libmachine: (kindnet-983557)     <pae/>
	I1001 20:42:18.868637   74151 main.go:141] libmachine: (kindnet-983557)     
	I1001 20:42:18.868642   74151 main.go:141] libmachine: (kindnet-983557)   </features>
	I1001 20:42:18.868648   74151 main.go:141] libmachine: (kindnet-983557)   <cpu mode='host-passthrough'>
	I1001 20:42:18.868655   74151 main.go:141] libmachine: (kindnet-983557)   
	I1001 20:42:18.868668   74151 main.go:141] libmachine: (kindnet-983557)   </cpu>
	I1001 20:42:18.868679   74151 main.go:141] libmachine: (kindnet-983557)   <os>
	I1001 20:42:18.868695   74151 main.go:141] libmachine: (kindnet-983557)     <type>hvm</type>
	I1001 20:42:18.868704   74151 main.go:141] libmachine: (kindnet-983557)     <boot dev='cdrom'/>
	I1001 20:42:18.868713   74151 main.go:141] libmachine: (kindnet-983557)     <boot dev='hd'/>
	I1001 20:42:18.868721   74151 main.go:141] libmachine: (kindnet-983557)     <bootmenu enable='no'/>
	I1001 20:42:18.868742   74151 main.go:141] libmachine: (kindnet-983557)   </os>
	I1001 20:42:18.868750   74151 main.go:141] libmachine: (kindnet-983557)   <devices>
	I1001 20:42:18.868754   74151 main.go:141] libmachine: (kindnet-983557)     <disk type='file' device='cdrom'>
	I1001 20:42:18.868769   74151 main.go:141] libmachine: (kindnet-983557)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/kindnet-983557/boot2docker.iso'/>
	I1001 20:42:18.868780   74151 main.go:141] libmachine: (kindnet-983557)       <target dev='hdc' bus='scsi'/>
	I1001 20:42:18.868788   74151 main.go:141] libmachine: (kindnet-983557)       <readonly/>
	I1001 20:42:18.868797   74151 main.go:141] libmachine: (kindnet-983557)     </disk>
	I1001 20:42:18.868806   74151 main.go:141] libmachine: (kindnet-983557)     <disk type='file' device='disk'>
	I1001 20:42:18.868817   74151 main.go:141] libmachine: (kindnet-983557)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 20:42:18.868832   74151 main.go:141] libmachine: (kindnet-983557)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/kindnet-983557/kindnet-983557.rawdisk'/>
	I1001 20:42:18.868841   74151 main.go:141] libmachine: (kindnet-983557)       <target dev='hda' bus='virtio'/>
	I1001 20:42:18.868846   74151 main.go:141] libmachine: (kindnet-983557)     </disk>
	I1001 20:42:18.868853   74151 main.go:141] libmachine: (kindnet-983557)     <interface type='network'>
	I1001 20:42:18.868866   74151 main.go:141] libmachine: (kindnet-983557)       <source network='mk-kindnet-983557'/>
	I1001 20:42:18.868876   74151 main.go:141] libmachine: (kindnet-983557)       <model type='virtio'/>
	I1001 20:42:18.868883   74151 main.go:141] libmachine: (kindnet-983557)     </interface>
	I1001 20:42:18.868893   74151 main.go:141] libmachine: (kindnet-983557)     <interface type='network'>
	I1001 20:42:18.868902   74151 main.go:141] libmachine: (kindnet-983557)       <source network='default'/>
	I1001 20:42:18.868912   74151 main.go:141] libmachine: (kindnet-983557)       <model type='virtio'/>
	I1001 20:42:18.868920   74151 main.go:141] libmachine: (kindnet-983557)     </interface>
	I1001 20:42:18.868927   74151 main.go:141] libmachine: (kindnet-983557)     <serial type='pty'>
	I1001 20:42:18.868933   74151 main.go:141] libmachine: (kindnet-983557)       <target port='0'/>
	I1001 20:42:18.868942   74151 main.go:141] libmachine: (kindnet-983557)     </serial>
	I1001 20:42:18.868949   74151 main.go:141] libmachine: (kindnet-983557)     <console type='pty'>
	I1001 20:42:18.868960   74151 main.go:141] libmachine: (kindnet-983557)       <target type='serial' port='0'/>
	I1001 20:42:18.868970   74151 main.go:141] libmachine: (kindnet-983557)     </console>
	I1001 20:42:18.868979   74151 main.go:141] libmachine: (kindnet-983557)     <rng model='virtio'>
	I1001 20:42:18.868991   74151 main.go:141] libmachine: (kindnet-983557)       <backend model='random'>/dev/random</backend>
	I1001 20:42:18.869005   74151 main.go:141] libmachine: (kindnet-983557)     </rng>
	I1001 20:42:18.869013   74151 main.go:141] libmachine: (kindnet-983557)     
	I1001 20:42:18.869018   74151 main.go:141] libmachine: (kindnet-983557)     
	I1001 20:42:18.869027   74151 main.go:141] libmachine: (kindnet-983557)   </devices>
	I1001 20:42:18.869034   74151 main.go:141] libmachine: (kindnet-983557) </domain>
	I1001 20:42:18.869050   74151 main.go:141] libmachine: (kindnet-983557) 
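	The domain definition logged above is plain libvirt XML. Below is a short, self-contained sketch of rendering a similar (simplified) definition with Go's text/template; the struct fields and file paths are placeholders, not minikube's real template:

```go
// Illustrative only: renders a simplified libvirt domain definition shaped
// like the one in the log, using text/template from the standard library.
package main

import (
	"os"
	"text/template"
)

type domain struct {
	Name, ISO, Disk, Network string
	MemoryMiB, VCPU          int
}

const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.VCPU}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISO}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.Disk}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
  </devices>
</domain>
`

func main() {
	t := template.Must(template.New("domain").Parse(domainXML))
	// Values mirror the logged kindnet-983557 machine; the paths are placeholders.
	d := domain{Name: "kindnet-983557", ISO: "/path/to/boot2docker.iso",
		Disk: "/path/to/kindnet-983557.rawdisk", Network: "mk-kindnet-983557",
		MemoryMiB: 3072, VCPU: 2}
	if err := t.Execute(os.Stdout, d); err != nil {
		panic(err)
	}
}
```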
	I1001 20:42:18.874279   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:b6:9a:5a in network default
	I1001 20:42:18.875037   74151 main.go:141] libmachine: (kindnet-983557) Ensuring networks are active...
	I1001 20:42:18.875063   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:18.875930   74151 main.go:141] libmachine: (kindnet-983557) Ensuring network default is active
	I1001 20:42:18.876275   74151 main.go:141] libmachine: (kindnet-983557) Ensuring network mk-kindnet-983557 is active
	I1001 20:42:18.876955   74151 main.go:141] libmachine: (kindnet-983557) Getting domain xml...
	I1001 20:42:18.877933   74151 main.go:141] libmachine: (kindnet-983557) Creating domain...
	I1001 20:42:20.278648   74151 main.go:141] libmachine: (kindnet-983557) Waiting to get IP...
	I1001 20:42:20.279815   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:20.280220   74151 main.go:141] libmachine: (kindnet-983557) DBG | unable to find current IP address of domain kindnet-983557 in network mk-kindnet-983557
	I1001 20:42:20.280348   74151 main.go:141] libmachine: (kindnet-983557) DBG | I1001 20:42:20.280289   74311 retry.go:31] will retry after 236.206967ms: waiting for machine to come up
	I1001 20:42:20.518831   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:20.519587   74151 main.go:141] libmachine: (kindnet-983557) DBG | unable to find current IP address of domain kindnet-983557 in network mk-kindnet-983557
	I1001 20:42:20.519612   74151 main.go:141] libmachine: (kindnet-983557) DBG | I1001 20:42:20.519555   74311 retry.go:31] will retry after 274.951363ms: waiting for machine to come up
	I1001 20:42:20.796511   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:20.797174   74151 main.go:141] libmachine: (kindnet-983557) DBG | unable to find current IP address of domain kindnet-983557 in network mk-kindnet-983557
	I1001 20:42:20.797202   74151 main.go:141] libmachine: (kindnet-983557) DBG | I1001 20:42:20.797141   74311 retry.go:31] will retry after 403.058098ms: waiting for machine to come up
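	The retry.go lines above poll the DHCP leases with growing, jittered delays until the new VM reports an IP. A minimal sketch of that wait loop is below; probeIP is a hypothetical stand-in for the lease lookup, not a real libmachine call:

```go
// Sketch of the retry pattern visible in the log ("will retry after ...:
// waiting for machine to come up"): backoff with jitter around a probe.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// probeIP is a hypothetical placeholder for "look up the domain's DHCP lease".
func probeIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.61.27", nil // placeholder address in the created subnet
}

func main() {
	backoff := 200 * time.Millisecond
	for attempt := 0; attempt < 20; attempt++ {
		ip, err := probeIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Add up to ~50% jitter so concurrent waiters don't probe in lockstep.
		wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		backoff = backoff * 3 / 2
	}
	fmt.Println("gave up waiting for an IP")
}
```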
	I1001 20:42:19.834552   73835 main.go:141] libmachine: (auto-983557) Calling .GetIP
	I1001 20:42:19.837505   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:19.838041   73835 main.go:141] libmachine: (auto-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:5e:b4", ip: ""} in network mk-auto-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:42:11 +0000 UTC Type:0 Mac:52:54:00:8c:5e:b4 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:auto-983557 Clientid:01:52:54:00:8c:5e:b4}
	I1001 20:42:19.838068   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined IP address 192.168.72.182 and MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:19.838301   73835 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1001 20:42:19.842552   73835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 20:42:19.855124   73835 kubeadm.go:883] updating cluster {Name:auto-983557 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-983557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 20:42:19.855236   73835 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 20:42:19.855287   73835 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:42:19.889264   73835 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1001 20:42:19.889332   73835 ssh_runner.go:195] Run: which lz4
	I1001 20:42:19.893456   73835 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 20:42:19.898334   73835 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 20:42:19.898386   73835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1001 20:42:21.307267   73835 crio.go:462] duration metric: took 1.413892753s to copy over tarball
	I1001 20:42:21.307353   73835 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 20:42:23.666770   73835 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.35938165s)
	I1001 20:42:23.666799   73835 crio.go:469] duration metric: took 2.35950252s to extract the tarball
	I1001 20:42:23.666806   73835 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1001 20:42:23.701962   73835 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:42:23.743821   73835 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 20:42:23.743847   73835 cache_images.go:84] Images are preloaded, skipping loading
	I1001 20:42:23.743856   73835 kubeadm.go:934] updating node { 192.168.72.182 8443 v1.31.1 crio true true} ...
	I1001 20:42:23.743992   73835 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-983557 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:auto-983557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 20:42:23.744064   73835 ssh_runner.go:195] Run: crio config
	I1001 20:42:23.795881   73835 cni.go:84] Creating CNI manager for ""
	I1001 20:42:23.795912   73835 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:42:23.795922   73835 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 20:42:23.795963   73835 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.182 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-983557 NodeName:auto-983557 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 20:42:23.796103   73835 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-983557"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
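	The networking section of the generated config above uses podSubnet 10.244.0.0/16 and serviceSubnet 10.96.0.0/12. As a small illustrative check only (kubeadm performs its own validation), the sketch below parses both CIDRs and confirms they do not overlap:

```go
// Sanity-check sketch for the pod and service CIDRs shown in the kubeadm
// config above. Purely illustrative; not part of minikube or kubeadm.
package main

import (
	"fmt"
	"net"
)

func overlap(a, b *net.IPNet) bool {
	// Aligned CIDR blocks overlap iff one contains the other's base address.
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	_, podCIDR, err := net.ParseCIDR("10.244.0.0/16")
	if err != nil {
		panic(err)
	}
	_, svcCIDR, err := net.ParseCIDR("10.96.0.0/12")
	if err != nil {
		panic(err)
	}
	if overlap(podCIDR, svcCIDR) {
		fmt.Println("pod and service subnets overlap - fix the config")
	} else {
		fmt.Println("pod and service subnets are disjoint")
	}
}
```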
	I1001 20:42:23.796165   73835 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 20:42:23.807276   73835 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 20:42:23.807357   73835 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 20:42:23.816274   73835 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1001 20:42:23.833987   73835 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 20:42:23.851072   73835 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I1001 20:42:23.868339   73835 ssh_runner.go:195] Run: grep 192.168.72.182	control-plane.minikube.internal$ /etc/hosts
	I1001 20:42:23.872927   73835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 20:42:23.885025   73835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:42:24.000338   73835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:42:24.018203   73835 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557 for IP: 192.168.72.182
	I1001 20:42:24.018244   73835 certs.go:194] generating shared ca certs ...
	I1001 20:42:24.018268   73835 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:42:24.018447   73835 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 20:42:24.018503   73835 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 20:42:24.018515   73835 certs.go:256] generating profile certs ...
	I1001 20:42:24.018586   73835 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557/client.key
	I1001 20:42:24.018614   73835 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557/client.crt with IP's: []
	I1001 20:42:24.425891   73835 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557/client.crt ...
	I1001 20:42:24.425922   73835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557/client.crt: {Name:mk0fe7e19b92a380fb389c56bf025c98e423a069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:42:24.426109   73835 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557/client.key ...
	I1001 20:42:24.426122   73835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557/client.key: {Name:mk58c3a3f4fe66716b2892e5ebf599f2024e099b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:42:24.426219   73835 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557/apiserver.key.9fe3472f
	I1001 20:42:24.426236   73835 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557/apiserver.crt.9fe3472f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.182]
	I1001 20:42:24.674561   73835 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557/apiserver.crt.9fe3472f ...
	I1001 20:42:24.674596   73835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557/apiserver.crt.9fe3472f: {Name:mk65be04eb14b34430a71639c5e36aac86db923a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:42:24.674809   73835 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557/apiserver.key.9fe3472f ...
	I1001 20:42:24.674831   73835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557/apiserver.key.9fe3472f: {Name:mk6849e95977dedeca3140af7e3fca665ce1ffab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:42:24.674952   73835 certs.go:381] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557/apiserver.crt.9fe3472f -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557/apiserver.crt
	I1001 20:42:24.675066   73835 certs.go:385] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557/apiserver.key.9fe3472f -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557/apiserver.key
	I1001 20:42:24.675166   73835 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557/proxy-client.key
	I1001 20:42:24.675187   73835 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557/proxy-client.crt with IP's: []
	I1001 20:42:24.988981   73835 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557/proxy-client.crt ...
	I1001 20:42:24.989019   73835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557/proxy-client.crt: {Name:mk46862f460fecbaf6a0cd7707807e1a8d39065b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:42:24.989246   73835 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557/proxy-client.key ...
	I1001 20:42:24.989264   73835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557/proxy-client.key: {Name:mkfeb913c5b7a0765eeb0740740c3adcdc8a1a19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
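	crypto.go above generates the profile's apiserver certificate with IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.72.182. The standalone sketch below creates a certificate with the same IP SANs using only the standard library; it self-signs for brevity, whereas minikube signs its profile certs with the cluster CA:

```go
// Sketch: generate a certificate carrying the IP SANs seen in the log.
// Self-signed here to stay short; not minikube's crypto.go implementation.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.72.182"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
```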
	I1001 20:42:24.989457   73835 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem (1338 bytes)
	W1001 20:42:24.989493   73835 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430_empty.pem, impossibly tiny 0 bytes
	I1001 20:42:24.989499   73835 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 20:42:24.989519   73835 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 20:42:24.989540   73835 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 20:42:24.989560   73835 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 20:42:24.989681   73835 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem (1708 bytes)
	I1001 20:42:24.990481   73835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 20:42:25.020687   73835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 20:42:25.045950   73835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 20:42:25.074768   73835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 20:42:25.102857   73835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1001 20:42:25.131654   73835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 20:42:25.158508   73835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 20:42:25.184487   73835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 20:42:25.210662   73835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 20:42:25.237452   73835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem --> /usr/share/ca-certificates/18430.pem (1338 bytes)
	I1001 20:42:25.264713   73835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /usr/share/ca-certificates/184302.pem (1708 bytes)
	I1001 20:42:25.299811   73835 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 20:42:25.319177   73835 ssh_runner.go:195] Run: openssl version
	I1001 20:42:25.325293   73835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 20:42:25.339505   73835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:42:25.344246   73835 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:42:25.344308   73835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:42:25.350262   73835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 20:42:25.364093   73835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18430.pem && ln -fs /usr/share/ca-certificates/18430.pem /etc/ssl/certs/18430.pem"
	I1001 20:42:25.375370   73835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18430.pem
	I1001 20:42:25.381775   73835 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 20:42:25.381844   73835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18430.pem
	I1001 20:42:25.387607   73835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18430.pem /etc/ssl/certs/51391683.0"
	I1001 20:42:25.403031   73835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/184302.pem && ln -fs /usr/share/ca-certificates/184302.pem /etc/ssl/certs/184302.pem"
	I1001 20:42:25.418618   73835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/184302.pem
	I1001 20:42:25.424530   73835 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 20:42:25.424588   73835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/184302.pem
	I1001 20:42:25.432379   73835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/184302.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 20:42:25.445861   73835 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 20:42:25.451527   73835 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 20:42:25.451599   73835 kubeadm.go:392] StartCluster: {Name:auto-983557 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-983557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:42:25.451694   73835 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 20:42:25.451793   73835 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 20:42:25.493266   73835 cri.go:89] found id: ""
	I1001 20:42:25.493349   73835 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 20:42:25.505064   73835 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 20:42:25.514746   73835 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:42:25.524394   73835 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:42:25.524418   73835 kubeadm.go:157] found existing configuration files:
	
	I1001 20:42:25.524474   73835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 20:42:25.534544   73835 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:42:25.534615   73835 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:42:25.545463   73835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 20:42:25.554308   73835 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:42:25.554376   73835 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:42:25.564760   73835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 20:42:25.573738   73835 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:42:25.573808   73835 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:42:25.583997   73835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 20:42:25.598815   73835 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:42:25.598884   73835 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 20:42:25.622572   73835 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 20:42:25.692213   73835 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 20:42:25.692301   73835 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:42:25.809318   73835 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:42:25.809478   73835 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:42:25.809644   73835 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 20:42:25.819055   73835 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 20:42:21.201500   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:21.202136   74151 main.go:141] libmachine: (kindnet-983557) DBG | unable to find current IP address of domain kindnet-983557 in network mk-kindnet-983557
	I1001 20:42:21.202167   74151 main.go:141] libmachine: (kindnet-983557) DBG | I1001 20:42:21.202098   74311 retry.go:31] will retry after 507.055007ms: waiting for machine to come up
	I1001 20:42:21.710815   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:21.711324   74151 main.go:141] libmachine: (kindnet-983557) DBG | unable to find current IP address of domain kindnet-983557 in network mk-kindnet-983557
	I1001 20:42:21.711350   74151 main.go:141] libmachine: (kindnet-983557) DBG | I1001 20:42:21.711286   74311 retry.go:31] will retry after 563.545678ms: waiting for machine to come up
	I1001 20:42:22.276229   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:22.276880   74151 main.go:141] libmachine: (kindnet-983557) DBG | unable to find current IP address of domain kindnet-983557 in network mk-kindnet-983557
	I1001 20:42:22.276923   74151 main.go:141] libmachine: (kindnet-983557) DBG | I1001 20:42:22.276839   74311 retry.go:31] will retry after 784.707367ms: waiting for machine to come up
	I1001 20:42:23.062694   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:23.063313   74151 main.go:141] libmachine: (kindnet-983557) DBG | unable to find current IP address of domain kindnet-983557 in network mk-kindnet-983557
	I1001 20:42:23.063345   74151 main.go:141] libmachine: (kindnet-983557) DBG | I1001 20:42:23.063287   74311 retry.go:31] will retry after 1.048345971s: waiting for machine to come up
	I1001 20:42:24.112773   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:24.113337   74151 main.go:141] libmachine: (kindnet-983557) DBG | unable to find current IP address of domain kindnet-983557 in network mk-kindnet-983557
	I1001 20:42:24.113367   74151 main.go:141] libmachine: (kindnet-983557) DBG | I1001 20:42:24.113292   74311 retry.go:31] will retry after 1.143386931s: waiting for machine to come up
	I1001 20:42:25.259045   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:25.259559   74151 main.go:141] libmachine: (kindnet-983557) DBG | unable to find current IP address of domain kindnet-983557 in network mk-kindnet-983557
	I1001 20:42:25.259587   74151 main.go:141] libmachine: (kindnet-983557) DBG | I1001 20:42:25.259504   74311 retry.go:31] will retry after 1.637090185s: waiting for machine to come up
	I1001 20:42:26.025974   73835 out.go:235]   - Generating certificates and keys ...
	I1001 20:42:26.026137   73835 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:42:26.026262   73835 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:42:26.026377   73835 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1001 20:42:26.107176   73835 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1001 20:42:26.266862   73835 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1001 20:42:26.425400   73835 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1001 20:42:26.550829   73835 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1001 20:42:26.551010   73835 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-983557 localhost] and IPs [192.168.72.182 127.0.0.1 ::1]
	I1001 20:42:26.732923   73835 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1001 20:42:26.733137   73835 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-983557 localhost] and IPs [192.168.72.182 127.0.0.1 ::1]
	I1001 20:42:26.985299   73835 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1001 20:42:27.095939   73835 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1001 20:42:27.351015   73835 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1001 20:42:27.351144   73835 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:42:27.495908   73835 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:42:27.809357   73835 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 20:42:27.940151   73835 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:42:27.994996   73835 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:42:28.152574   73835 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:42:28.153117   73835 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:42:28.155779   73835 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:42:26.898159   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:26.898681   74151 main.go:141] libmachine: (kindnet-983557) DBG | unable to find current IP address of domain kindnet-983557 in network mk-kindnet-983557
	I1001 20:42:26.898709   74151 main.go:141] libmachine: (kindnet-983557) DBG | I1001 20:42:26.898629   74311 retry.go:31] will retry after 1.7934264s: waiting for machine to come up
	I1001 20:42:28.694041   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:28.694594   74151 main.go:141] libmachine: (kindnet-983557) DBG | unable to find current IP address of domain kindnet-983557 in network mk-kindnet-983557
	I1001 20:42:28.694625   74151 main.go:141] libmachine: (kindnet-983557) DBG | I1001 20:42:28.694531   74311 retry.go:31] will retry after 1.964097068s: waiting for machine to come up
	I1001 20:42:30.661599   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:30.662112   74151 main.go:141] libmachine: (kindnet-983557) DBG | unable to find current IP address of domain kindnet-983557 in network mk-kindnet-983557
	I1001 20:42:30.662148   74151 main.go:141] libmachine: (kindnet-983557) DBG | I1001 20:42:30.662071   74311 retry.go:31] will retry after 2.930878286s: waiting for machine to come up
	I1001 20:42:28.157604   73835 out.go:235]   - Booting up control plane ...
	I1001 20:42:28.157741   73835 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:42:28.157847   73835 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:42:28.157964   73835 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:42:28.194856   73835 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:42:28.203667   73835 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:42:28.203765   73835 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:42:28.338345   73835 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 20:42:28.338503   73835 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 20:42:28.840035   73835 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.115378ms
	I1001 20:42:28.840146   73835 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1001 20:42:34.338291   73835 kubeadm.go:310] [api-check] The API server is healthy after 5.501924375s
	I1001 20:42:34.354524   73835 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 20:42:34.368572   73835 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 20:42:34.408968   73835 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 20:42:34.409184   73835 kubeadm.go:310] [mark-control-plane] Marking the node auto-983557 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 20:42:34.426317   73835 kubeadm.go:310] [bootstrap-token] Using token: a1j4jy.qkko920kqxkeybeh
	I1001 20:42:34.427403   73835 out.go:235]   - Configuring RBAC rules ...
	I1001 20:42:34.427552   73835 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 20:42:34.435241   73835 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 20:42:34.448946   73835 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 20:42:34.453022   73835 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 20:42:34.465379   73835 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 20:42:34.473211   73835 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 20:42:34.748047   73835 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 20:42:35.194797   73835 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 20:42:35.747309   73835 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 20:42:35.747624   73835 kubeadm.go:310] 
	I1001 20:42:35.747735   73835 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 20:42:35.747755   73835 kubeadm.go:310] 
	I1001 20:42:35.747863   73835 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 20:42:35.747878   73835 kubeadm.go:310] 
	I1001 20:42:35.747914   73835 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 20:42:35.747996   73835 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 20:42:35.748070   73835 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 20:42:35.748079   73835 kubeadm.go:310] 
	I1001 20:42:35.748132   73835 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 20:42:35.748142   73835 kubeadm.go:310] 
	I1001 20:42:35.748204   73835 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 20:42:35.748216   73835 kubeadm.go:310] 
	I1001 20:42:35.748323   73835 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 20:42:35.748450   73835 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 20:42:35.748544   73835 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 20:42:35.748589   73835 kubeadm.go:310] 
	I1001 20:42:35.748710   73835 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 20:42:35.748819   73835 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 20:42:35.748831   73835 kubeadm.go:310] 
	I1001 20:42:35.748947   73835 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a1j4jy.qkko920kqxkeybeh \
	I1001 20:42:35.749112   73835 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 \
	I1001 20:42:35.749151   73835 kubeadm.go:310] 	--control-plane 
	I1001 20:42:35.749165   73835 kubeadm.go:310] 
	I1001 20:42:35.749266   73835 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 20:42:35.749275   73835 kubeadm.go:310] 
	I1001 20:42:35.749378   73835 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a1j4jy.qkko920kqxkeybeh \
	I1001 20:42:35.749526   73835 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 
	I1001 20:42:35.750354   73835 kubeadm.go:310] W1001 20:42:25.669832     835 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 20:42:35.750712   73835 kubeadm.go:310] W1001 20:42:25.670596     835 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 20:42:35.750869   73835 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
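For reference, the sha256:0c56edec... value in the two kubeadm join commands above is the discovery-token CA cert hash, i.e. a SHA-256 digest of the cluster CA certificate's Subject Public Key Info. A minimal Go sketch (not part of the test harness) that recomputes it on the control-plane node, assuming the default kubeadm CA path /etc/kubernetes/pki/ca.crt:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Default kubeadm CA location; adjust if the cluster uses a custom cert dir.
	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's discovery hash is SHA-256 over the CA cert's Subject Public Key Info.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}

Running this on the auto-983557 node should print the same sha256: value that appears in the join commands above.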
	I1001 20:42:35.750907   73835 cni.go:84] Creating CNI manager for ""
	I1001 20:42:35.750918   73835 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:42:35.752495   73835 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 20:42:33.594598   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:33.595097   74151 main.go:141] libmachine: (kindnet-983557) DBG | unable to find current IP address of domain kindnet-983557 in network mk-kindnet-983557
	I1001 20:42:33.595124   74151 main.go:141] libmachine: (kindnet-983557) DBG | I1001 20:42:33.595044   74311 retry.go:31] will retry after 3.076984728s: waiting for machine to come up
	I1001 20:42:35.753696   73835 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 20:42:35.764043   73835 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1001 20:42:35.792288   73835 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 20:42:35.792388   73835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:42:35.792390   73835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-983557 minikube.k8s.io/updated_at=2024_10_01T20_42_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=auto-983557 minikube.k8s.io/primary=true
	I1001 20:42:35.961763   73835 ops.go:34] apiserver oom_adj: -16
	I1001 20:42:35.961839   73835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:42:36.462030   73835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:42:36.962846   73835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:42:37.462299   73835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:42:37.962857   73835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:42:38.462692   73835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:42:38.961956   73835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:42:39.462669   73835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:42:39.962552   73835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:42:40.053733   73835 kubeadm.go:1113] duration metric: took 4.261429229s to wait for elevateKubeSystemPrivileges
	I1001 20:42:40.053770   73835 kubeadm.go:394] duration metric: took 14.602175245s to StartCluster
	I1001 20:42:40.053793   73835 settings.go:142] acquiring lock: {Name:mkeb3fe64ed992373f048aef24eb0d675bcab60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:42:40.053868   73835 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:42:40.055872   73835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/kubeconfig: {Name:mk7ef19d546928001abf2478a3abe1d17765a591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:42:40.056110   73835 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 20:42:40.056185   73835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1001 20:42:40.056222   73835 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 20:42:40.056375   73835 addons.go:69] Setting storage-provisioner=true in profile "auto-983557"
	I1001 20:42:40.056394   73835 addons.go:69] Setting default-storageclass=true in profile "auto-983557"
	I1001 20:42:40.056402   73835 addons.go:234] Setting addon storage-provisioner=true in "auto-983557"
	I1001 20:42:40.056412   73835 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-983557"
	I1001 20:42:40.056450   73835 host.go:66] Checking if "auto-983557" exists ...
	I1001 20:42:40.056459   73835 config.go:182] Loaded profile config "auto-983557": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:42:40.056886   73835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:42:40.056923   73835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:42:40.057009   73835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:42:40.057068   73835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:42:40.057611   73835 out.go:177] * Verifying Kubernetes components...
	I1001 20:42:40.058969   73835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:42:40.073094   73835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43949
	I1001 20:42:40.073100   73835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42391
	I1001 20:42:40.073634   73835 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:42:40.073680   73835 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:42:40.074152   73835 main.go:141] libmachine: Using API Version  1
	I1001 20:42:40.074172   73835 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:42:40.074284   73835 main.go:141] libmachine: Using API Version  1
	I1001 20:42:40.074304   73835 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:42:40.074519   73835 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:42:40.074590   73835 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:42:40.075150   73835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:42:40.075179   73835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:42:40.075376   73835 main.go:141] libmachine: (auto-983557) Calling .GetState
	I1001 20:42:40.078590   73835 addons.go:234] Setting addon default-storageclass=true in "auto-983557"
	I1001 20:42:40.078621   73835 host.go:66] Checking if "auto-983557" exists ...
	I1001 20:42:40.078851   73835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:42:40.078879   73835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:42:40.091614   73835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35627
	I1001 20:42:40.092054   73835 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:42:40.092586   73835 main.go:141] libmachine: Using API Version  1
	I1001 20:42:40.092618   73835 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:42:40.092976   73835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43369
	I1001 20:42:40.093004   73835 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:42:40.093169   73835 main.go:141] libmachine: (auto-983557) Calling .GetState
	I1001 20:42:40.093436   73835 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:42:40.093876   73835 main.go:141] libmachine: Using API Version  1
	I1001 20:42:40.093901   73835 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:42:40.094475   73835 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:42:40.095180   73835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:42:40.095234   73835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:42:40.095253   73835 main.go:141] libmachine: (auto-983557) Calling .DriverName
	I1001 20:42:40.097001   73835 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 20:42:36.675456   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:36.675933   74151 main.go:141] libmachine: (kindnet-983557) DBG | unable to find current IP address of domain kindnet-983557 in network mk-kindnet-983557
	I1001 20:42:36.675960   74151 main.go:141] libmachine: (kindnet-983557) DBG | I1001 20:42:36.675869   74311 retry.go:31] will retry after 4.52237466s: waiting for machine to come up
	I1001 20:42:40.098431   73835 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:42:40.098448   73835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 20:42:40.098467   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHHostname
	I1001 20:42:40.101769   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:40.102312   73835 main.go:141] libmachine: (auto-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:5e:b4", ip: ""} in network mk-auto-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:42:11 +0000 UTC Type:0 Mac:52:54:00:8c:5e:b4 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:auto-983557 Clientid:01:52:54:00:8c:5e:b4}
	I1001 20:42:40.102338   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined IP address 192.168.72.182 and MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:40.102524   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHPort
	I1001 20:42:40.102759   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHKeyPath
	I1001 20:42:40.102934   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHUsername
	I1001 20:42:40.103093   73835 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/auto-983557/id_rsa Username:docker}
	I1001 20:42:40.112192   73835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35727
	I1001 20:42:40.112708   73835 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:42:40.113191   73835 main.go:141] libmachine: Using API Version  1
	I1001 20:42:40.113212   73835 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:42:40.113523   73835 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:42:40.113784   73835 main.go:141] libmachine: (auto-983557) Calling .GetState
	I1001 20:42:40.115686   73835 main.go:141] libmachine: (auto-983557) Calling .DriverName
	I1001 20:42:40.116145   73835 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 20:42:40.116177   73835 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 20:42:40.116201   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHHostname
	I1001 20:42:40.119395   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:40.119884   73835 main.go:141] libmachine: (auto-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:5e:b4", ip: ""} in network mk-auto-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:42:11 +0000 UTC Type:0 Mac:52:54:00:8c:5e:b4 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:auto-983557 Clientid:01:52:54:00:8c:5e:b4}
	I1001 20:42:40.119907   73835 main.go:141] libmachine: (auto-983557) DBG | domain auto-983557 has defined IP address 192.168.72.182 and MAC address 52:54:00:8c:5e:b4 in network mk-auto-983557
	I1001 20:42:40.120228   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHPort
	I1001 20:42:40.120433   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHKeyPath
	I1001 20:42:40.120580   73835 main.go:141] libmachine: (auto-983557) Calling .GetSSHUsername
	I1001 20:42:40.120705   73835 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/auto-983557/id_rsa Username:docker}
	I1001 20:42:40.333066   73835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:42:40.333589   73835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1001 20:42:40.412152   73835 node_ready.go:35] waiting up to 15m0s for node "auto-983557" to be "Ready" ...
	I1001 20:42:40.421341   73835 node_ready.go:49] node "auto-983557" has status "Ready":"True"
	I1001 20:42:40.421369   73835 node_ready.go:38] duration metric: took 9.186571ms for node "auto-983557" to be "Ready" ...
	I1001 20:42:40.421378   73835 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:42:40.430996   73835 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-99krt" in "kube-system" namespace to be "Ready" ...
	I1001 20:42:40.494660   73835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 20:42:40.531904   73835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:42:40.947732   73835 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
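The ssh_runner call above pipes the coredns ConfigMap through sed to add a hosts stanza for host.minikube.internal before replacing the ConfigMap. A small Go sketch of the same edit applied to a Corefile string (illustrative only: the sample Corefile is a placeholder, and the parallel insertion of the log directive after errors is omitted):

package main

import (
	"fmt"
	"strings"
)

// injectMinikubeHost inserts a "hosts" block in front of the
// "forward . /etc/resolv.conf" line so host.minikube.internal resolves
// to the host-side gateway, mirroring the sed expression in the log above.
func injectMinikubeHost(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	// Placeholder Corefile, not the one from the cluster.
	corefile := ".:53 {\n        errors\n        health\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
	fmt.Print(injectMinikubeHost(corefile, "192.168.72.1"))
}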
	I1001 20:42:40.947767   73835 main.go:141] libmachine: Making call to close driver server
	I1001 20:42:40.947928   73835 main.go:141] libmachine: (auto-983557) Calling .Close
	I1001 20:42:40.948209   73835 main.go:141] libmachine: (auto-983557) DBG | Closing plugin on server side
	I1001 20:42:40.948215   73835 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:42:40.948273   73835 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:42:40.948290   73835 main.go:141] libmachine: Making call to close driver server
	I1001 20:42:40.948298   73835 main.go:141] libmachine: (auto-983557) Calling .Close
	I1001 20:42:40.949836   73835 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:42:40.949857   73835 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:42:40.962401   73835 main.go:141] libmachine: Making call to close driver server
	I1001 20:42:40.962429   73835 main.go:141] libmachine: (auto-983557) Calling .Close
	I1001 20:42:40.962718   73835 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:42:40.962734   73835 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:42:40.962760   73835 main.go:141] libmachine: (auto-983557) DBG | Closing plugin on server side
	I1001 20:42:41.455137   73835 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-983557" context rescaled to 1 replicas
	I1001 20:42:41.496735   73835 main.go:141] libmachine: Making call to close driver server
	I1001 20:42:41.496763   73835 main.go:141] libmachine: (auto-983557) Calling .Close
	I1001 20:42:41.497041   73835 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:42:41.497095   73835 main.go:141] libmachine: (auto-983557) DBG | Closing plugin on server side
	I1001 20:42:41.497122   73835 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:42:41.497134   73835 main.go:141] libmachine: Making call to close driver server
	I1001 20:42:41.497142   73835 main.go:141] libmachine: (auto-983557) Calling .Close
	I1001 20:42:41.497390   73835 main.go:141] libmachine: (auto-983557) DBG | Closing plugin on server side
	I1001 20:42:41.497422   73835 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:42:41.497439   73835 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:42:41.499254   73835 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1001 20:42:41.500439   73835 addons.go:510] duration metric: took 1.444221815s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1001 20:42:41.201771   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:41.202311   74151 main.go:141] libmachine: (kindnet-983557) Found IP for machine: 192.168.61.181
	I1001 20:42:41.202332   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has current primary IP address 192.168.61.181 and MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:41.202339   74151 main.go:141] libmachine: (kindnet-983557) Reserving static IP address...
	I1001 20:42:41.202856   74151 main.go:141] libmachine: (kindnet-983557) DBG | unable to find host DHCP lease matching {name: "kindnet-983557", mac: "52:54:00:9d:4c:98", ip: "192.168.61.181"} in network mk-kindnet-983557
	I1001 20:42:41.293553   74151 main.go:141] libmachine: (kindnet-983557) DBG | Getting to WaitForSSH function...
	I1001 20:42:41.293587   74151 main.go:141] libmachine: (kindnet-983557) Reserved static IP address: 192.168.61.181
	I1001 20:42:41.293601   74151 main.go:141] libmachine: (kindnet-983557) Waiting for SSH to be available...
	I1001 20:42:41.296751   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:41.297218   74151 main.go:141] libmachine: (kindnet-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:4c:98", ip: ""} in network mk-kindnet-983557: {Iface:virbr3 ExpiryTime:2024-10-01 21:42:33 +0000 UTC Type:0 Mac:52:54:00:9d:4c:98 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9d:4c:98}
	I1001 20:42:41.297243   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined IP address 192.168.61.181 and MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:41.297501   74151 main.go:141] libmachine: (kindnet-983557) DBG | Using SSH client type: external
	I1001 20:42:41.297521   74151 main.go:141] libmachine: (kindnet-983557) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/kindnet-983557/id_rsa (-rw-------)
	I1001 20:42:41.297557   74151 main.go:141] libmachine: (kindnet-983557) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.181 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/kindnet-983557/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 20:42:41.297571   74151 main.go:141] libmachine: (kindnet-983557) DBG | About to run SSH command:
	I1001 20:42:41.297583   74151 main.go:141] libmachine: (kindnet-983557) DBG | exit 0
	I1001 20:42:41.429299   74151 main.go:141] libmachine: (kindnet-983557) DBG | SSH cmd err, output: <nil>: 
	I1001 20:42:41.429718   74151 main.go:141] libmachine: (kindnet-983557) KVM machine creation complete!
	I1001 20:42:41.430409   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetConfigRaw
	I1001 20:42:41.431104   74151 main.go:141] libmachine: (kindnet-983557) Calling .DriverName
	I1001 20:42:41.431369   74151 main.go:141] libmachine: (kindnet-983557) Calling .DriverName
	I1001 20:42:41.431594   74151 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 20:42:41.431614   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetState
	I1001 20:42:41.433295   74151 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 20:42:41.433314   74151 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 20:42:41.433322   74151 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 20:42:41.433348   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHHostname
	I1001 20:42:41.436276   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:41.436696   74151 main.go:141] libmachine: (kindnet-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:4c:98", ip: ""} in network mk-kindnet-983557: {Iface:virbr3 ExpiryTime:2024-10-01 21:42:33 +0000 UTC Type:0 Mac:52:54:00:9d:4c:98 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:kindnet-983557 Clientid:01:52:54:00:9d:4c:98}
	I1001 20:42:41.436723   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined IP address 192.168.61.181 and MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:41.436912   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHPort
	I1001 20:42:41.437118   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHKeyPath
	I1001 20:42:41.437345   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHKeyPath
	I1001 20:42:41.437512   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHUsername
	I1001 20:42:41.437746   74151 main.go:141] libmachine: Using SSH client type: native
	I1001 20:42:41.438010   74151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.181 22 <nil> <nil>}
	I1001 20:42:41.438033   74151 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 20:42:41.547806   74151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
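The two "exit 0" runs above are libmachine's WaitForSSH step: keep dialing the new VM and running a no-op command until SSH answers. A rough Go sketch of that loop using golang.org/x/crypto/ssh, with the address, user and key path taken from the log; the retry interval and timeout are made up for the example:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH dials host:22 with the given key and runs "exit 0" until it
// succeeds or the timeout expires (illustrative only, not minikube's code).
func waitForSSH(host, user, keyPath string, timeout time.Duration) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
		Timeout:         10 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		client, err := ssh.Dial("tcp", host+":22", cfg)
		if err == nil {
			sess, serr := client.NewSession()
			if serr == nil {
				rerr := sess.Run("exit 0")
				sess.Close()
				client.Close()
				if rerr == nil {
					return nil // SSH is available
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(2 * time.Second) // assumed retry interval
	}
	return fmt.Errorf("timed out waiting for SSH on %s", host)
}

func main() {
	key := "/home/jenkins/minikube-integration/19736-11198/.minikube/machines/kindnet-983557/id_rsa"
	if err := waitForSSH("192.168.61.181", "docker", key, 2*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}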
	I1001 20:42:41.547830   74151 main.go:141] libmachine: Detecting the provisioner...
	I1001 20:42:41.547839   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHHostname
	I1001 20:42:41.551271   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:41.551774   74151 main.go:141] libmachine: (kindnet-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:4c:98", ip: ""} in network mk-kindnet-983557: {Iface:virbr3 ExpiryTime:2024-10-01 21:42:33 +0000 UTC Type:0 Mac:52:54:00:9d:4c:98 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:kindnet-983557 Clientid:01:52:54:00:9d:4c:98}
	I1001 20:42:41.551803   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined IP address 192.168.61.181 and MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:41.552052   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHPort
	I1001 20:42:41.552280   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHKeyPath
	I1001 20:42:41.552528   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHKeyPath
	I1001 20:42:41.552687   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHUsername
	I1001 20:42:41.552877   74151 main.go:141] libmachine: Using SSH client type: native
	I1001 20:42:41.553119   74151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.181 22 <nil> <nil>}
	I1001 20:42:41.553140   74151 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 20:42:41.661008   74151 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 20:42:41.661101   74151 main.go:141] libmachine: found compatible host: buildroot
	I1001 20:42:41.661116   74151 main.go:141] libmachine: Provisioning with buildroot...
	I1001 20:42:41.661130   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetMachineName
	I1001 20:42:41.661393   74151 buildroot.go:166] provisioning hostname "kindnet-983557"
	I1001 20:42:41.661419   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetMachineName
	I1001 20:42:41.661644   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHHostname
	I1001 20:42:41.664977   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:41.665397   74151 main.go:141] libmachine: (kindnet-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:4c:98", ip: ""} in network mk-kindnet-983557: {Iface:virbr3 ExpiryTime:2024-10-01 21:42:33 +0000 UTC Type:0 Mac:52:54:00:9d:4c:98 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:kindnet-983557 Clientid:01:52:54:00:9d:4c:98}
	I1001 20:42:41.665438   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined IP address 192.168.61.181 and MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:41.665669   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHPort
	I1001 20:42:41.665899   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHKeyPath
	I1001 20:42:41.666057   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHKeyPath
	I1001 20:42:41.666230   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHUsername
	I1001 20:42:41.666382   74151 main.go:141] libmachine: Using SSH client type: native
	I1001 20:42:41.666635   74151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.181 22 <nil> <nil>}
	I1001 20:42:41.666655   74151 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-983557 && echo "kindnet-983557" | sudo tee /etc/hostname
	I1001 20:42:41.792803   74151 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-983557
	
	I1001 20:42:41.792833   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHHostname
	I1001 20:42:41.796141   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:41.796567   74151 main.go:141] libmachine: (kindnet-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:4c:98", ip: ""} in network mk-kindnet-983557: {Iface:virbr3 ExpiryTime:2024-10-01 21:42:33 +0000 UTC Type:0 Mac:52:54:00:9d:4c:98 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:kindnet-983557 Clientid:01:52:54:00:9d:4c:98}
	I1001 20:42:41.796598   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined IP address 192.168.61.181 and MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:41.796743   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHPort
	I1001 20:42:41.796944   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHKeyPath
	I1001 20:42:41.797077   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHKeyPath
	I1001 20:42:41.797219   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHUsername
	I1001 20:42:41.797395   74151 main.go:141] libmachine: Using SSH client type: native
	I1001 20:42:41.797562   74151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.181 22 <nil> <nil>}
	I1001 20:42:41.797577   74151 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-983557' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-983557/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-983557' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 20:42:41.914775   74151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 20:42:41.914805   74151 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 20:42:41.914867   74151 buildroot.go:174] setting up certificates
	I1001 20:42:41.914883   74151 provision.go:84] configureAuth start
	I1001 20:42:41.914897   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetMachineName
	I1001 20:42:41.915233   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetIP
	I1001 20:42:41.918797   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:41.919244   74151 main.go:141] libmachine: (kindnet-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:4c:98", ip: ""} in network mk-kindnet-983557: {Iface:virbr3 ExpiryTime:2024-10-01 21:42:33 +0000 UTC Type:0 Mac:52:54:00:9d:4c:98 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:kindnet-983557 Clientid:01:52:54:00:9d:4c:98}
	I1001 20:42:41.919270   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined IP address 192.168.61.181 and MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:41.919449   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHHostname
	I1001 20:42:41.922142   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:41.922549   74151 main.go:141] libmachine: (kindnet-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:4c:98", ip: ""} in network mk-kindnet-983557: {Iface:virbr3 ExpiryTime:2024-10-01 21:42:33 +0000 UTC Type:0 Mac:52:54:00:9d:4c:98 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:kindnet-983557 Clientid:01:52:54:00:9d:4c:98}
	I1001 20:42:41.922602   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined IP address 192.168.61.181 and MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:41.922748   74151 provision.go:143] copyHostCerts
	I1001 20:42:41.922814   74151 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 20:42:41.922826   74151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 20:42:41.922882   74151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 20:42:41.923008   74151 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 20:42:41.923017   74151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 20:42:41.923044   74151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 20:42:41.923132   74151 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 20:42:41.923142   74151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 20:42:41.923172   74151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 20:42:41.923251   74151 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.kindnet-983557 san=[127.0.0.1 192.168.61.181 kindnet-983557 localhost minikube]
	I1001 20:42:42.013829   74151 provision.go:177] copyRemoteCerts
	I1001 20:42:42.013893   74151 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 20:42:42.013919   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHHostname
	I1001 20:42:42.017163   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:42.017617   74151 main.go:141] libmachine: (kindnet-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:4c:98", ip: ""} in network mk-kindnet-983557: {Iface:virbr3 ExpiryTime:2024-10-01 21:42:33 +0000 UTC Type:0 Mac:52:54:00:9d:4c:98 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:kindnet-983557 Clientid:01:52:54:00:9d:4c:98}
	I1001 20:42:42.017650   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined IP address 192.168.61.181 and MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:42.017883   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHPort
	I1001 20:42:42.018070   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHKeyPath
	I1001 20:42:42.018233   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHUsername
	I1001 20:42:42.018385   74151 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/kindnet-983557/id_rsa Username:docker}
	I1001 20:42:42.102102   74151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 20:42:42.126995   74151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1001 20:42:42.151704   74151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 20:42:42.176193   74151 provision.go:87] duration metric: took 261.295617ms to configureAuth
	I1001 20:42:42.176248   74151 buildroot.go:189] setting minikube options for container-runtime
	I1001 20:42:42.176460   74151 config.go:182] Loaded profile config "kindnet-983557": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:42:42.176552   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHHostname
	I1001 20:42:42.179497   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:42.179961   74151 main.go:141] libmachine: (kindnet-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:4c:98", ip: ""} in network mk-kindnet-983557: {Iface:virbr3 ExpiryTime:2024-10-01 21:42:33 +0000 UTC Type:0 Mac:52:54:00:9d:4c:98 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:kindnet-983557 Clientid:01:52:54:00:9d:4c:98}
	I1001 20:42:42.179983   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined IP address 192.168.61.181 and MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:42.180126   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHPort
	I1001 20:42:42.180328   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHKeyPath
	I1001 20:42:42.180507   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHKeyPath
	I1001 20:42:42.180648   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHUsername
	I1001 20:42:42.180815   74151 main.go:141] libmachine: Using SSH client type: native
	I1001 20:42:42.180970   74151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.181 22 <nil> <nil>}
	I1001 20:42:42.180984   74151 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 20:42:42.406201   74151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 20:42:42.406229   74151 main.go:141] libmachine: Checking connection to Docker...
	I1001 20:42:42.406237   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetURL
	I1001 20:42:42.407693   74151 main.go:141] libmachine: (kindnet-983557) DBG | Using libvirt version 6000000
	I1001 20:42:42.410201   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:42.410465   74151 main.go:141] libmachine: (kindnet-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:4c:98", ip: ""} in network mk-kindnet-983557: {Iface:virbr3 ExpiryTime:2024-10-01 21:42:33 +0000 UTC Type:0 Mac:52:54:00:9d:4c:98 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:kindnet-983557 Clientid:01:52:54:00:9d:4c:98}
	I1001 20:42:42.410488   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined IP address 192.168.61.181 and MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:42.410705   74151 main.go:141] libmachine: Docker is up and running!
	I1001 20:42:42.410724   74151 main.go:141] libmachine: Reticulating splines...
	I1001 20:42:42.410732   74151 client.go:171] duration metric: took 24.056404981s to LocalClient.Create
	I1001 20:42:42.410757   74151 start.go:167] duration metric: took 24.056470397s to libmachine.API.Create "kindnet-983557"
	I1001 20:42:42.410769   74151 start.go:293] postStartSetup for "kindnet-983557" (driver="kvm2")
	I1001 20:42:42.410782   74151 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 20:42:42.410804   74151 main.go:141] libmachine: (kindnet-983557) Calling .DriverName
	I1001 20:42:42.411070   74151 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 20:42:42.411096   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHHostname
	I1001 20:42:42.413134   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:42.413456   74151 main.go:141] libmachine: (kindnet-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:4c:98", ip: ""} in network mk-kindnet-983557: {Iface:virbr3 ExpiryTime:2024-10-01 21:42:33 +0000 UTC Type:0 Mac:52:54:00:9d:4c:98 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:kindnet-983557 Clientid:01:52:54:00:9d:4c:98}
	I1001 20:42:42.413487   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined IP address 192.168.61.181 and MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:42.413596   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHPort
	I1001 20:42:42.413748   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHKeyPath
	I1001 20:42:42.413916   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHUsername
	I1001 20:42:42.414091   74151 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/kindnet-983557/id_rsa Username:docker}
	I1001 20:42:42.496032   74151 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 20:42:42.500627   74151 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 20:42:42.500650   74151 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 20:42:42.500722   74151 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 20:42:42.500819   74151 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 20:42:42.500926   74151 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 20:42:42.511759   74151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 20:42:42.536586   74151 start.go:296] duration metric: took 125.802161ms for postStartSetup
	I1001 20:42:42.536645   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetConfigRaw
	I1001 20:42:42.537233   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetIP
	I1001 20:42:42.539925   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:42.540295   74151 main.go:141] libmachine: (kindnet-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:4c:98", ip: ""} in network mk-kindnet-983557: {Iface:virbr3 ExpiryTime:2024-10-01 21:42:33 +0000 UTC Type:0 Mac:52:54:00:9d:4c:98 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:kindnet-983557 Clientid:01:52:54:00:9d:4c:98}
	I1001 20:42:42.540322   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined IP address 192.168.61.181 and MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:42.540598   74151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/kindnet-983557/config.json ...
	I1001 20:42:42.540855   74151 start.go:128] duration metric: took 24.207737206s to createHost
	I1001 20:42:42.540880   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHHostname
	I1001 20:42:42.543146   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:42.543505   74151 main.go:141] libmachine: (kindnet-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:4c:98", ip: ""} in network mk-kindnet-983557: {Iface:virbr3 ExpiryTime:2024-10-01 21:42:33 +0000 UTC Type:0 Mac:52:54:00:9d:4c:98 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:kindnet-983557 Clientid:01:52:54:00:9d:4c:98}
	I1001 20:42:42.543532   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined IP address 192.168.61.181 and MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:42.543711   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHPort
	I1001 20:42:42.543913   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHKeyPath
	I1001 20:42:42.544069   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHKeyPath
	I1001 20:42:42.544195   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHUsername
	I1001 20:42:42.544339   74151 main.go:141] libmachine: Using SSH client type: native
	I1001 20:42:42.544562   74151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.181 22 <nil> <nil>}
	I1001 20:42:42.544575   74151 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 20:42:42.649127   74151 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727815362.624565763
	
	I1001 20:42:42.649161   74151 fix.go:216] guest clock: 1727815362.624565763
	I1001 20:42:42.649175   74151 fix.go:229] Guest: 2024-10-01 20:42:42.624565763 +0000 UTC Remote: 2024-10-01 20:42:42.540869864 +0000 UTC m=+41.494494847 (delta=83.695899ms)
	I1001 20:42:42.649215   74151 fix.go:200] guest clock delta is within tolerance: 83.695899ms
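The fix.go lines above parse the guest's `date +%s.%N` output and compare it with the host-side reference time. A small Go sketch that reproduces the 83.695899ms delta from the two timestamps logged above (the parsing helper is illustrative, not minikube's own code):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "date +%s.%N" output (e.g. "1727815362.624565763")
// into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad/truncate the fractional part to exactly 9 digits of nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1727815362.624565763") // guest output from the log
	if err != nil {
		panic(err)
	}
	host := time.Date(2024, 10, 1, 20, 42, 42, 540869864, time.UTC) // "Remote" timestamp from the log
	fmt.Println("guest clock delta:", guest.Sub(host))              // prints 83.695899ms
}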
	I1001 20:42:42.649229   74151 start.go:83] releasing machines lock for "kindnet-983557", held for 24.316282234s
	I1001 20:42:42.649260   74151 main.go:141] libmachine: (kindnet-983557) Calling .DriverName
	I1001 20:42:42.649500   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetIP
	I1001 20:42:42.652625   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:42.653068   74151 main.go:141] libmachine: (kindnet-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:4c:98", ip: ""} in network mk-kindnet-983557: {Iface:virbr3 ExpiryTime:2024-10-01 21:42:33 +0000 UTC Type:0 Mac:52:54:00:9d:4c:98 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:kindnet-983557 Clientid:01:52:54:00:9d:4c:98}
	I1001 20:42:42.653111   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined IP address 192.168.61.181 and MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:42.653319   74151 main.go:141] libmachine: (kindnet-983557) Calling .DriverName
	I1001 20:42:42.653820   74151 main.go:141] libmachine: (kindnet-983557) Calling .DriverName
	I1001 20:42:42.654025   74151 main.go:141] libmachine: (kindnet-983557) Calling .DriverName
	I1001 20:42:42.654140   74151 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 20:42:42.654178   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHHostname
	I1001 20:42:42.654273   74151 ssh_runner.go:195] Run: cat /version.json
	I1001 20:42:42.654296   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHHostname
	I1001 20:42:42.657887   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:42.657923   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:42.658318   74151 main.go:141] libmachine: (kindnet-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:4c:98", ip: ""} in network mk-kindnet-983557: {Iface:virbr3 ExpiryTime:2024-10-01 21:42:33 +0000 UTC Type:0 Mac:52:54:00:9d:4c:98 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:kindnet-983557 Clientid:01:52:54:00:9d:4c:98}
	I1001 20:42:42.658358   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined IP address 192.168.61.181 and MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:42.658458   74151 main.go:141] libmachine: (kindnet-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:4c:98", ip: ""} in network mk-kindnet-983557: {Iface:virbr3 ExpiryTime:2024-10-01 21:42:33 +0000 UTC Type:0 Mac:52:54:00:9d:4c:98 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:kindnet-983557 Clientid:01:52:54:00:9d:4c:98}
	I1001 20:42:42.658503   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHPort
	I1001 20:42:42.658509   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined IP address 192.168.61.181 and MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:42.658740   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHKeyPath
	I1001 20:42:42.658755   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHPort
	I1001 20:42:42.658889   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHUsername
	I1001 20:42:42.658963   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHKeyPath
	I1001 20:42:42.659018   74151 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/kindnet-983557/id_rsa Username:docker}
	I1001 20:42:42.659094   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetSSHUsername
	I1001 20:42:42.659212   74151 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/kindnet-983557/id_rsa Username:docker}
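The SSH parameters logged above (guest IP 192.168.61.181, port 22, user docker, and the per-machine key) are enough to open the same session by hand when debugging a run; a minimal sketch, assuming the workspace paths from this job are still present on the Jenkins host:

    ssh -p 22 \
        -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/kindnet-983557/id_rsa \
        docker@192.168.61.181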
	I1001 20:42:42.733143   74151 ssh_runner.go:195] Run: systemctl --version
	I1001 20:42:42.767960   74151 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 20:42:42.925237   74151 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 20:42:42.931860   74151 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 20:42:42.931926   74151 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 20:42:42.949173   74151 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
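The CNI cleanup above keeps only the loopback config (its absence is tolerated, as the warning shows) and renames any bridge/podman configs out of the way so they cannot shadow the requested kindnet CNI. A sketch of the equivalent commands on the guest, with shell quoting restored (GNU find substitutes {} inside the -exec argument):

    stat /etc/cni/net.d/*loopback.conf*    # warning only if missing
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;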
	I1001 20:42:42.949193   74151 start.go:495] detecting cgroup driver to use...
	I1001 20:42:42.949253   74151 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 20:42:42.965779   74151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 20:42:42.980463   74151 docker.go:217] disabling cri-docker service (if available) ...
	I1001 20:42:42.980548   74151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 20:42:42.995188   74151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 20:42:43.009744   74151 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 20:42:43.129649   74151 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 20:42:43.298767   74151 docker.go:233] disabling docker service ...
	I1001 20:42:43.298840   74151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 20:42:43.314495   74151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 20:42:43.327448   74151 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 20:42:43.471328   74151 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 20:42:43.609119   74151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
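Because CRI-O is the runtime under test, the cri-docker shim and docker itself are stopped and masked before CRI-O is configured; a condensed sketch of the commands in the log:

    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    sudo systemctl is-active --quiet docker && echo "docker still active"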
	I1001 20:42:43.622180   74151 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 20:42:43.639837   74151 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 20:42:43.639920   74151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:42:43.649692   74151 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 20:42:43.649766   74151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:42:43.659876   74151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:42:43.670286   74151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:42:43.680412   74151 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 20:42:43.690309   74151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:42:43.699722   74151 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:42:43.716295   74151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
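The crictl endpoint and the CRI-O drop-in edits above boil down to the following sequence (a sketch assembled from the logged commands; /etc/crio/crio.conf.d/02-crio.conf is the drop-in used by the minikube guest OS):

    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    # expose unprivileged low ports to pods via default_sysctls
    sudo grep -q '^ *default_sysctls' /etc/crio/crio.conf.d/02-crio.conf \
      || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf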
	I1001 20:42:43.726856   74151 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 20:42:43.736033   74151 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 20:42:43.736096   74151 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 20:42:43.750288   74151 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
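The sysctl probe exits with status 255 only because the br_netfilter module is not yet loaded, so /proc/sys/net/bridge/bridge-nf-call-iptables does not exist; the remedy shown in the log is simply:

    sudo modprobe br_netfilter               # creates the net.bridge.* sysctl keys
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'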
	I1001 20:42:43.764283   74151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:42:43.898850   74151 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 20:42:43.991797   74151 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 20:42:43.991894   74151 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 20:42:43.996976   74151 start.go:563] Will wait 60s for crictl version
	I1001 20:42:43.997029   74151 ssh_runner.go:195] Run: which crictl
	I1001 20:42:44.000323   74151 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 20:42:44.042185   74151 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 20:42:44.042271   74151 ssh_runner.go:195] Run: crio --version
	I1001 20:42:44.071135   74151 ssh_runner.go:195] Run: crio --version
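After the config edits, CRI-O is restarted and the runtime is probed through its socket; condensed from the log:

    sudo systemctl daemon-reload
    sudo systemctl restart crio
    stat /var/run/crio/crio.sock        # minikube waits up to 60s for this socket
    sudo /usr/bin/crictl version        # expects RuntimeName cri-o, RuntimeVersion 1.29.1
    crio --version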
	I1001 20:42:44.099914   74151 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 20:42:44.101113   74151 main.go:141] libmachine: (kindnet-983557) Calling .GetIP
	I1001 20:42:44.104404   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:44.104876   74151 main.go:141] libmachine: (kindnet-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:4c:98", ip: ""} in network mk-kindnet-983557: {Iface:virbr3 ExpiryTime:2024-10-01 21:42:33 +0000 UTC Type:0 Mac:52:54:00:9d:4c:98 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:kindnet-983557 Clientid:01:52:54:00:9d:4c:98}
	I1001 20:42:44.104932   74151 main.go:141] libmachine: (kindnet-983557) DBG | domain kindnet-983557 has defined IP address 192.168.61.181 and MAC address 52:54:00:9d:4c:98 in network mk-kindnet-983557
	I1001 20:42:44.105161   74151 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1001 20:42:44.109048   74151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 20:42:44.121337   74151 kubeadm.go:883] updating cluster {Name:kindnet-983557 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
1.1 ClusterName:kindnet-983557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.61.181 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 20:42:44.121439   74151 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 20:42:44.121482   74151 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:42:44.157337   74151 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1001 20:42:44.157415   74151 ssh_runner.go:195] Run: which lz4
	I1001 20:42:44.161156   74151 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 20:42:44.164983   74151 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 20:42:44.165016   74151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1001 20:42:45.491227   74151 crio.go:462] duration metric: took 1.330106252s to copy over tarball
	I1001 20:42:45.491289   74151 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
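No preloaded images were found in the CRI-O image store, so the cached preload tarball is pushed to the guest and unpacked into /var. Paths are taken from the log; minikube streams the file over its own SSH session, shown here as a plain scp purely for illustration:

    sudo crictl images --output json    # registry.k8s.io/kube-apiserver:v1.31.1 missing -> not preloaded
    scp -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/kindnet-983557/id_rsa \
        /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 \
        docker@192.168.61.181:/preloaded.tar.lz4
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4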
	I1001 20:42:42.437418   73835 pod_ready.go:103] pod "coredns-7c65d6cfc9-99krt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:42:44.439051   73835 pod_ready.go:103] pod "coredns-7c65d6cfc9-99krt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:42:46.937456   73835 pod_ready.go:103] pod "coredns-7c65d6cfc9-99krt" in "kube-system" namespace has status "Ready":"False"
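The lines tagged with pid 73835 are a second, interleaved profile still polling its coredns pod; the same readiness check can be done by hand (assuming kubectl's current context points at that cluster):

    kubectl -n kube-system get pod coredns-7c65d6cfc9-99krt \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'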
	
	
	==> CRI-O <==
	Oct 01 20:42:48 embed-certs-106982 crio[716]: time="2024-10-01 20:42:48.312370266Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815368312295152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2a10a649-28b8-4217-a59c-aee097f80621 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:42:48 embed-certs-106982 crio[716]: time="2024-10-01 20:42:48.313229363Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ef9c07d-cc83-4223-9cbe-c377172e1b76 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:42:48 embed-certs-106982 crio[716]: time="2024-10-01 20:42:48.313285233Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ef9c07d-cc83-4223-9cbe-c377172e1b76 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:42:48 embed-certs-106982 crio[716]: time="2024-10-01 20:42:48.313502813Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:49956e29a325cfda62a0b0ddf30ac17398312cc9fbef9933a45a20dd90e9d7f4,PodSandboxId:e7bd7a99780ccbbee9f2f3eadc66d382e572973514dcbb35c1d84129b78e4764,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727814382711276531,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aaab1f2-8361-46c6-88be-ed9004628715,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed6a9cbaccaf595eaf1508b7e27e572fc3d6bb42981beec1a8ba77ddc80490e,PodSandboxId:bc15115378909caaa1b9f904887679d07d8298120f67307b523c1559feafb4de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814381604651449,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5ms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 652fcc3d-ae12-4e11-b212-8891c1c05701,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bca7dc0957012fce7669cf95da2da702d227ffbdd5ed0171872b58719c908e8d,PodSandboxId:d7fd735c09a752b6ed7dd40c2af00729c730e4363260e62626e05dc9d5ae7c23,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814381450670970,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wfdwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
174cd48-6855-4813-9ecd-3b3a82386720,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b21a0cbb3a52290e36564c48071015faf99a296f89f200bcfa148a3c95d76ca,PodSandboxId:1bddc641693b85ab307065a31ca507e1e70676cfc4d85b42faaa6ebb70db7376,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727814381081947263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fjnvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 728b1b90-5961-45e9-9818-8fc6f6db1634,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a99aae7d75e2fa9c02da40f811e88ccdfb98330c078417c46dda7892065ec0,PodSandboxId:d59aaf738c583d020c40193e07e23efc8334d9ec12fb24378780b4bc1a11f9e5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727814370131874920,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20c73fd156c9e4f64240f6fa41d9888d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b71bb11a38d4d3cee348c612978d120ef0b43650039509e95f793c5c224aab74,PodSandboxId:352152c2449c88805800055ddf9aa37ab049449f10b9842b68bf647ec87d184c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727814370109737135,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a509b4a275e96f7e1fb9a5675e98f42,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49dc9b41de775e3af100f0fba4f1a15993b61c34b27af01f776f85419b41a10e,PodSandboxId:16096076c09d5cc2c26167d746eb591295f0cfc58d72654c3f49fcdd317ac88d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727814370087503165,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd4bc15ab11faab9c227d13239baa161,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58bdf5137deb5d4626b9591e8a3ecd2e91299386f1e65875d27005f5c7848a16,PodSandboxId:62be514e87c68085f9432f46d952a9af9d16e56a50a769cea308ec4f39d0fb00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727814370034180117,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a52129df49edfb54a3732fda1a5b47c2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc8e0b8fa08173938da2c5f8e2005d704509fdce496896cf1760962e4b7c749,PodSandboxId:0c60aee3aed0249253019fd569881f45bff179141c40c212cf45ba441f80acfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727814083077444753,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd4bc15ab11faab9c227d13239baa161,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ef9c07d-cc83-4223-9cbe-c377172e1b76 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:42:48 embed-certs-106982 crio[716]: time="2024-10-01 20:42:48.350894700Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6a018f0a-8e99-4d80-ae40-15d12b0c3e3b name=/runtime.v1.RuntimeService/Version
	Oct 01 20:42:48 embed-certs-106982 crio[716]: time="2024-10-01 20:42:48.350996413Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6a018f0a-8e99-4d80-ae40-15d12b0c3e3b name=/runtime.v1.RuntimeService/Version
	Oct 01 20:42:48 embed-certs-106982 crio[716]: time="2024-10-01 20:42:48.352596539Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0732afaf-304d-4edd-8b5d-37932a5e8aab name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:42:48 embed-certs-106982 crio[716]: time="2024-10-01 20:42:48.353389137Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815368353358552,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0732afaf-304d-4edd-8b5d-37932a5e8aab name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:42:48 embed-certs-106982 crio[716]: time="2024-10-01 20:42:48.354478429Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ea7446d-8b30-4d0a-aefc-a8508bf24028 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:42:48 embed-certs-106982 crio[716]: time="2024-10-01 20:42:48.354563226Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ea7446d-8b30-4d0a-aefc-a8508bf24028 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:42:48 embed-certs-106982 crio[716]: time="2024-10-01 20:42:48.355216450Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:49956e29a325cfda62a0b0ddf30ac17398312cc9fbef9933a45a20dd90e9d7f4,PodSandboxId:e7bd7a99780ccbbee9f2f3eadc66d382e572973514dcbb35c1d84129b78e4764,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727814382711276531,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aaab1f2-8361-46c6-88be-ed9004628715,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed6a9cbaccaf595eaf1508b7e27e572fc3d6bb42981beec1a8ba77ddc80490e,PodSandboxId:bc15115378909caaa1b9f904887679d07d8298120f67307b523c1559feafb4de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814381604651449,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5ms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 652fcc3d-ae12-4e11-b212-8891c1c05701,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bca7dc0957012fce7669cf95da2da702d227ffbdd5ed0171872b58719c908e8d,PodSandboxId:d7fd735c09a752b6ed7dd40c2af00729c730e4363260e62626e05dc9d5ae7c23,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814381450670970,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wfdwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
174cd48-6855-4813-9ecd-3b3a82386720,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b21a0cbb3a52290e36564c48071015faf99a296f89f200bcfa148a3c95d76ca,PodSandboxId:1bddc641693b85ab307065a31ca507e1e70676cfc4d85b42faaa6ebb70db7376,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727814381081947263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fjnvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 728b1b90-5961-45e9-9818-8fc6f6db1634,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a99aae7d75e2fa9c02da40f811e88ccdfb98330c078417c46dda7892065ec0,PodSandboxId:d59aaf738c583d020c40193e07e23efc8334d9ec12fb24378780b4bc1a11f9e5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727814370131874920,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20c73fd156c9e4f64240f6fa41d9888d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b71bb11a38d4d3cee348c612978d120ef0b43650039509e95f793c5c224aab74,PodSandboxId:352152c2449c88805800055ddf9aa37ab049449f10b9842b68bf647ec87d184c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727814370109737135,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a509b4a275e96f7e1fb9a5675e98f42,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49dc9b41de775e3af100f0fba4f1a15993b61c34b27af01f776f85419b41a10e,PodSandboxId:16096076c09d5cc2c26167d746eb591295f0cfc58d72654c3f49fcdd317ac88d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727814370087503165,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd4bc15ab11faab9c227d13239baa161,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58bdf5137deb5d4626b9591e8a3ecd2e91299386f1e65875d27005f5c7848a16,PodSandboxId:62be514e87c68085f9432f46d952a9af9d16e56a50a769cea308ec4f39d0fb00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727814370034180117,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a52129df49edfb54a3732fda1a5b47c2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc8e0b8fa08173938da2c5f8e2005d704509fdce496896cf1760962e4b7c749,PodSandboxId:0c60aee3aed0249253019fd569881f45bff179141c40c212cf45ba441f80acfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727814083077444753,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd4bc15ab11faab9c227d13239baa161,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ea7446d-8b30-4d0a-aefc-a8508bf24028 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:42:48 embed-certs-106982 crio[716]: time="2024-10-01 20:42:48.401230836Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2696b9ac-684a-4880-a818-8b4b6527285c name=/runtime.v1.RuntimeService/Version
	Oct 01 20:42:48 embed-certs-106982 crio[716]: time="2024-10-01 20:42:48.401325456Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2696b9ac-684a-4880-a818-8b4b6527285c name=/runtime.v1.RuntimeService/Version
	Oct 01 20:42:48 embed-certs-106982 crio[716]: time="2024-10-01 20:42:48.403146170Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=97ec0018-7b42-4950-b97e-f7d48c7ad80b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:42:48 embed-certs-106982 crio[716]: time="2024-10-01 20:42:48.403730628Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815368403702361,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97ec0018-7b42-4950-b97e-f7d48c7ad80b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:42:48 embed-certs-106982 crio[716]: time="2024-10-01 20:42:48.404718979Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a1f65c32-ca04-4871-9aeb-0b7c700abb91 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:42:48 embed-certs-106982 crio[716]: time="2024-10-01 20:42:48.404787233Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a1f65c32-ca04-4871-9aeb-0b7c700abb91 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:42:48 embed-certs-106982 crio[716]: time="2024-10-01 20:42:48.405205876Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:49956e29a325cfda62a0b0ddf30ac17398312cc9fbef9933a45a20dd90e9d7f4,PodSandboxId:e7bd7a99780ccbbee9f2f3eadc66d382e572973514dcbb35c1d84129b78e4764,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727814382711276531,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aaab1f2-8361-46c6-88be-ed9004628715,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed6a9cbaccaf595eaf1508b7e27e572fc3d6bb42981beec1a8ba77ddc80490e,PodSandboxId:bc15115378909caaa1b9f904887679d07d8298120f67307b523c1559feafb4de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814381604651449,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5ms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 652fcc3d-ae12-4e11-b212-8891c1c05701,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bca7dc0957012fce7669cf95da2da702d227ffbdd5ed0171872b58719c908e8d,PodSandboxId:d7fd735c09a752b6ed7dd40c2af00729c730e4363260e62626e05dc9d5ae7c23,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814381450670970,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wfdwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
174cd48-6855-4813-9ecd-3b3a82386720,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b21a0cbb3a52290e36564c48071015faf99a296f89f200bcfa148a3c95d76ca,PodSandboxId:1bddc641693b85ab307065a31ca507e1e70676cfc4d85b42faaa6ebb70db7376,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727814381081947263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fjnvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 728b1b90-5961-45e9-9818-8fc6f6db1634,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a99aae7d75e2fa9c02da40f811e88ccdfb98330c078417c46dda7892065ec0,PodSandboxId:d59aaf738c583d020c40193e07e23efc8334d9ec12fb24378780b4bc1a11f9e5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727814370131874920,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20c73fd156c9e4f64240f6fa41d9888d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b71bb11a38d4d3cee348c612978d120ef0b43650039509e95f793c5c224aab74,PodSandboxId:352152c2449c88805800055ddf9aa37ab049449f10b9842b68bf647ec87d184c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727814370109737135,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a509b4a275e96f7e1fb9a5675e98f42,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49dc9b41de775e3af100f0fba4f1a15993b61c34b27af01f776f85419b41a10e,PodSandboxId:16096076c09d5cc2c26167d746eb591295f0cfc58d72654c3f49fcdd317ac88d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727814370087503165,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd4bc15ab11faab9c227d13239baa161,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58bdf5137deb5d4626b9591e8a3ecd2e91299386f1e65875d27005f5c7848a16,PodSandboxId:62be514e87c68085f9432f46d952a9af9d16e56a50a769cea308ec4f39d0fb00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727814370034180117,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a52129df49edfb54a3732fda1a5b47c2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc8e0b8fa08173938da2c5f8e2005d704509fdce496896cf1760962e4b7c749,PodSandboxId:0c60aee3aed0249253019fd569881f45bff179141c40c212cf45ba441f80acfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727814083077444753,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd4bc15ab11faab9c227d13239baa161,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a1f65c32-ca04-4871-9aeb-0b7c700abb91 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:42:48 embed-certs-106982 crio[716]: time="2024-10-01 20:42:48.449270291Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5e6c63a8-9935-4d34-af3c-ee43c6618f44 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:42:48 embed-certs-106982 crio[716]: time="2024-10-01 20:42:48.449358580Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5e6c63a8-9935-4d34-af3c-ee43c6618f44 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:42:48 embed-certs-106982 crio[716]: time="2024-10-01 20:42:48.450474864Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a63e2ff8-e62a-4961-b86c-cdd2d3755e4d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:42:48 embed-certs-106982 crio[716]: time="2024-10-01 20:42:48.450880251Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815368450857954,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a63e2ff8-e62a-4961-b86c-cdd2d3755e4d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:42:48 embed-certs-106982 crio[716]: time="2024-10-01 20:42:48.451821964Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=608c7ac5-e987-488f-a873-b5e5e7c1f348 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:42:48 embed-certs-106982 crio[716]: time="2024-10-01 20:42:48.451899958Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=608c7ac5-e987-488f-a873-b5e5e7c1f348 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:42:48 embed-certs-106982 crio[716]: time="2024-10-01 20:42:48.452146083Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:49956e29a325cfda62a0b0ddf30ac17398312cc9fbef9933a45a20dd90e9d7f4,PodSandboxId:e7bd7a99780ccbbee9f2f3eadc66d382e572973514dcbb35c1d84129b78e4764,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727814382711276531,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aaab1f2-8361-46c6-88be-ed9004628715,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed6a9cbaccaf595eaf1508b7e27e572fc3d6bb42981beec1a8ba77ddc80490e,PodSandboxId:bc15115378909caaa1b9f904887679d07d8298120f67307b523c1559feafb4de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814381604651449,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rq5ms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 652fcc3d-ae12-4e11-b212-8891c1c05701,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bca7dc0957012fce7669cf95da2da702d227ffbdd5ed0171872b58719c908e8d,PodSandboxId:d7fd735c09a752b6ed7dd40c2af00729c730e4363260e62626e05dc9d5ae7c23,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814381450670970,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wfdwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
174cd48-6855-4813-9ecd-3b3a82386720,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b21a0cbb3a52290e36564c48071015faf99a296f89f200bcfa148a3c95d76ca,PodSandboxId:1bddc641693b85ab307065a31ca507e1e70676cfc4d85b42faaa6ebb70db7376,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727814381081947263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fjnvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 728b1b90-5961-45e9-9818-8fc6f6db1634,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a99aae7d75e2fa9c02da40f811e88ccdfb98330c078417c46dda7892065ec0,PodSandboxId:d59aaf738c583d020c40193e07e23efc8334d9ec12fb24378780b4bc1a11f9e5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727814370131874920,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20c73fd156c9e4f64240f6fa41d9888d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b71bb11a38d4d3cee348c612978d120ef0b43650039509e95f793c5c224aab74,PodSandboxId:352152c2449c88805800055ddf9aa37ab049449f10b9842b68bf647ec87d184c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727814370109737135,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a509b4a275e96f7e1fb9a5675e98f42,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49dc9b41de775e3af100f0fba4f1a15993b61c34b27af01f776f85419b41a10e,PodSandboxId:16096076c09d5cc2c26167d746eb591295f0cfc58d72654c3f49fcdd317ac88d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727814370087503165,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd4bc15ab11faab9c227d13239baa161,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58bdf5137deb5d4626b9591e8a3ecd2e91299386f1e65875d27005f5c7848a16,PodSandboxId:62be514e87c68085f9432f46d952a9af9d16e56a50a769cea308ec4f39d0fb00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727814370034180117,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a52129df49edfb54a3732fda1a5b47c2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc8e0b8fa08173938da2c5f8e2005d704509fdce496896cf1760962e4b7c749,PodSandboxId:0c60aee3aed0249253019fd569881f45bff179141c40c212cf45ba441f80acfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727814083077444753,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-106982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd4bc15ab11faab9c227d13239baa161,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=608c7ac5-e987-488f-a873-b5e5e7c1f348 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	49956e29a325c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   e7bd7a99780cc       storage-provisioner
	bed6a9cbaccaf       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   bc15115378909       coredns-7c65d6cfc9-rq5ms
	bca7dc0957012       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   d7fd735c09a75       coredns-7c65d6cfc9-wfdwp
	7b21a0cbb3a52       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   16 minutes ago      Running             kube-proxy                0                   1bddc641693b8       kube-proxy-fjnvc
	f0a99aae7d75e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   d59aaf738c583       etcd-embed-certs-106982
	b71bb11a38d4d       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   16 minutes ago      Running             kube-scheduler            2                   352152c2449c8       kube-scheduler-embed-certs-106982
	49dc9b41de775       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   16 minutes ago      Running             kube-apiserver            2                   16096076c09d5       kube-apiserver-embed-certs-106982
	58bdf5137deb5       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   16 minutes ago      Running             kube-controller-manager   2                   62be514e87c68       kube-controller-manager-embed-certs-106982
	bfc8e0b8fa081       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   21 minutes ago      Exited              kube-apiserver            1                   0c60aee3aed02       kube-apiserver-embed-certs-106982
	
	
	==> coredns [bca7dc0957012fce7669cf95da2da702d227ffbdd5ed0171872b58719c908e8d] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [bed6a9cbaccaf595eaf1508b7e27e572fc3d6bb42981beec1a8ba77ddc80490e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-106982
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-106982
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=embed-certs-106982
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T20_26_16_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 20:26:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-106982
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 20:42:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 20:41:44 +0000   Tue, 01 Oct 2024 20:26:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 20:41:44 +0000   Tue, 01 Oct 2024 20:26:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 20:41:44 +0000   Tue, 01 Oct 2024 20:26:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 20:41:44 +0000   Tue, 01 Oct 2024 20:26:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    embed-certs-106982
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd30dfe38c9a4961913c765d396796b3
	  System UUID:                cd30dfe3-8c9a-4961-913c-765d396796b3
	  Boot ID:                    774f8b5c-9259-48db-98ed-09e0764a8164
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-rq5ms                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-wfdwp                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-106982                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-106982             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-106982    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-fjnvc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-106982             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-z27sl               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node embed-certs-106982 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node embed-certs-106982 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node embed-certs-106982 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node embed-certs-106982 event: Registered Node embed-certs-106982 in Controller
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056448] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039315] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Oct 1 20:21] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.005356] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.347801] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.228031] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.147610] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.208860] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.162576] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.346586] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[  +4.553015] systemd-fstab-generator[797]: Ignoring "noauto" option for root device
	[  +0.069552] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.380245] systemd-fstab-generator[920]: Ignoring "noauto" option for root device
	[  +5.676723] kauditd_printk_skb: 97 callbacks suppressed
	[  +8.106833] kauditd_printk_skb: 85 callbacks suppressed
	[Oct 1 20:26] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.380165] systemd-fstab-generator[2572]: Ignoring "noauto" option for root device
	[  +4.443944] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.925941] systemd-fstab-generator[2894]: Ignoring "noauto" option for root device
	[  +5.411887] systemd-fstab-generator[3028]: Ignoring "noauto" option for root device
	[  +0.038526] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.343296] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [f0a99aae7d75e2fa9c02da40f811e88ccdfb98330c078417c46dda7892065ec0] <==
	{"level":"info","ts":"2024-10-01T20:26:11.306878Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T20:26:11.318810Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-01T20:29:52.146102Z","caller":"traceutil/trace.go:171","msg":"trace[1832817733] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"131.567452ms","start":"2024-10-01T20:29:52.014428Z","end":"2024-10-01T20:29:52.145996Z","steps":["trace[1832817733] 'process raft request'  (duration: 131.463213ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T20:36:11.371621Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":685}
	{"level":"info","ts":"2024-10-01T20:36:11.380673Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":685,"took":"8.713883ms","hash":1803009240,"current-db-size-bytes":2158592,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2158592,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-10-01T20:36:11.380774Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1803009240,"revision":685,"compact-revision":-1}
	{"level":"info","ts":"2024-10-01T20:41:11.381732Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":928}
	{"level":"info","ts":"2024-10-01T20:41:11.386233Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":928,"took":"3.752098ms","hash":2037474365,"current-db-size-bytes":2158592,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1548288,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-10-01T20:41:11.386332Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2037474365,"revision":928,"compact-revision":685}
	{"level":"info","ts":"2024-10-01T20:41:44.348682Z","caller":"traceutil/trace.go:171","msg":"trace[1885288390] transaction","detail":"{read_only:false; response_revision:1199; number_of_response:1; }","duration":"446.705028ms","start":"2024-10-01T20:41:43.901942Z","end":"2024-10-01T20:41:44.348647Z","steps":["trace[1885288390] 'process raft request'  (duration: 446.561109ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T20:41:44.349477Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T20:41:43.901897Z","time spent":"446.855685ms","remote":"127.0.0.1:42224","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1198 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-01T20:41:44.607539Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.831464ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.203\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-10-01T20:41:44.607699Z","caller":"traceutil/trace.go:171","msg":"trace[792653452] range","detail":"{range_begin:/registry/masterleases/192.168.39.203; range_end:; response_count:1; response_revision:1199; }","duration":"173.996542ms","start":"2024-10-01T20:41:44.433690Z","end":"2024-10-01T20:41:44.607687Z","steps":["trace[792653452] 'range keys from in-memory index tree'  (duration: 173.623359ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T20:41:44.607528Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"233.894224ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T20:41:44.607878Z","caller":"traceutil/trace.go:171","msg":"trace[1722446933] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1199; }","duration":"234.274085ms","start":"2024-10-01T20:41:44.373597Z","end":"2024-10-01T20:41:44.607871Z","steps":["trace[1722446933] 'range keys from in-memory index tree'  (duration: 233.777252ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T20:41:44.808765Z","caller":"traceutil/trace.go:171","msg":"trace[1413455056] transaction","detail":"{read_only:false; response_revision:1200; number_of_response:1; }","duration":"286.920361ms","start":"2024-10-01T20:41:44.521822Z","end":"2024-10-01T20:41:44.808742Z","steps":["trace[1413455056] 'process raft request'  (duration: 286.782822ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T20:41:44.998049Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T20:41:44.608811Z","time spent":"389.198853ms","remote":"127.0.0.1:42086","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-10-01T20:41:44.998204Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"341.51197ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-01T20:41:44.998256Z","caller":"traceutil/trace.go:171","msg":"trace[945212484] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; response_count:0; response_revision:1200; }","duration":"341.581721ms","start":"2024-10-01T20:41:44.656663Z","end":"2024-10-01T20:41:44.998245Z","steps":["trace[945212484] 'agreement among raft nodes before linearized reading'  (duration: 341.448572ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T20:41:44.998284Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T20:41:44.656630Z","time spent":"341.64722ms","remote":"127.0.0.1:42364","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":68,"response size":29,"request content":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" count_only:true "}
	{"level":"info","ts":"2024-10-01T20:41:44.998082Z","caller":"traceutil/trace.go:171","msg":"trace[1399004158] linearizableReadLoop","detail":"{readStateIndex:1403; appliedIndex:1402; }","duration":"341.312648ms","start":"2024-10-01T20:41:44.656680Z","end":"2024-10-01T20:41:44.997993Z","steps":["trace[1399004158] 'read index received'  (duration: 152.368775ms)","trace[1399004158] 'applied index is now lower than readState.Index'  (duration: 188.894809ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-01T20:42:26.835377Z","caller":"traceutil/trace.go:171","msg":"trace[818706601] linearizableReadLoop","detail":"{readStateIndex:1446; appliedIndex:1445; }","duration":"145.191125ms","start":"2024-10-01T20:42:26.690155Z","end":"2024-10-01T20:42:26.835347Z","steps":["trace[818706601] 'read index received'  (duration: 144.981071ms)","trace[818706601] 'applied index is now lower than readState.Index'  (duration: 209.383µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-01T20:42:26.835599Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.374861ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-01T20:42:26.835651Z","caller":"traceutil/trace.go:171","msg":"trace[845827604] range","detail":"{range_begin:/registry/roles/; range_end:/registry/roles0; response_count:0; response_revision:1235; }","duration":"145.500482ms","start":"2024-10-01T20:42:26.690137Z","end":"2024-10-01T20:42:26.835637Z","steps":["trace[845827604] 'agreement among raft nodes before linearized reading'  (duration: 145.340412ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T20:42:26.835844Z","caller":"traceutil/trace.go:171","msg":"trace[2122561207] transaction","detail":"{read_only:false; response_revision:1235; number_of_response:1; }","duration":"231.589499ms","start":"2024-10-01T20:42:26.604235Z","end":"2024-10-01T20:42:26.835825Z","steps":["trace[2122561207] 'process raft request'  (duration: 230.972886ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:42:48 up 21 min,  0 users,  load average: 0.00, 0.10, 0.12
	Linux embed-certs-106982 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [49dc9b41de775e3af100f0fba4f1a15993b61c34b27af01f776f85419b41a10e] <==
	I1001 20:39:13.704481       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1001 20:39:13.704495       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1001 20:41:12.702766       1 handler_proxy.go:99] no RequestInfo found in the context
	E1001 20:41:12.703380       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1001 20:41:13.705187       1 handler_proxy.go:99] no RequestInfo found in the context
	W1001 20:41:13.705187       1 handler_proxy.go:99] no RequestInfo found in the context
	E1001 20:41:13.705440       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1001 20:41:13.705617       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1001 20:41:13.706830       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1001 20:41:13.706932       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1001 20:42:13.707879       1 handler_proxy.go:99] no RequestInfo found in the context
	E1001 20:42:13.707961       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1001 20:42:13.708065       1 handler_proxy.go:99] no RequestInfo found in the context
	E1001 20:42:13.708091       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1001 20:42:13.709107       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1001 20:42:13.709116       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [bfc8e0b8fa08173938da2c5f8e2005d704509fdce496896cf1760962e4b7c749] <==
	W1001 20:26:03.223186       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.279381       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.286972       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.286996       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.360873       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.369537       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.425926       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.428419       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.489437       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.514140       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.531560       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.578562       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.578658       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.592434       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.594899       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.618507       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.652718       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.731813       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.801757       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.810727       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.817923       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.909354       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.985372       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:03.988912       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:26:04.110814       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [58bdf5137deb5d4626b9591e8a3ecd2e91299386f1e65875d27005f5c7848a16] <==
	I1001 20:37:31.743803       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="219.895µs"
	I1001 20:37:44.740346       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="51.602µs"
	E1001 20:37:49.769884       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:37:50.245003       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:38:19.776755       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:38:20.253996       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:38:49.783792       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:38:50.262444       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:39:19.791200       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:39:20.272721       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:39:49.798923       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:39:50.281293       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:40:19.805678       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:40:20.289589       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:40:49.812588       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:40:50.299952       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:41:19.820595       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:41:20.309540       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1001 20:41:44.812517       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-106982"
	E1001 20:41:49.828195       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:41:50.318673       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:42:19.834552       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:42:20.329733       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1001 20:42:29.744302       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="237.039µs"
	I1001 20:42:41.747232       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="96.529µs"
	
	
	==> kube-proxy [7b21a0cbb3a52290e36564c48071015faf99a296f89f200bcfa148a3c95d76ca] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 20:26:21.769105       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 20:26:21.796501       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.203"]
	E1001 20:26:21.796580       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 20:26:22.113375       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 20:26:22.113421       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 20:26:22.113449       1 server_linux.go:169] "Using iptables Proxier"
	I1001 20:26:22.150369       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 20:26:22.150669       1 server.go:483] "Version info" version="v1.31.1"
	I1001 20:26:22.150681       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 20:26:22.169837       1 config.go:199] "Starting service config controller"
	I1001 20:26:22.170004       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 20:26:22.170168       1 config.go:105] "Starting endpoint slice config controller"
	I1001 20:26:22.170175       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 20:26:22.174818       1 config.go:328] "Starting node config controller"
	I1001 20:26:22.174836       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 20:26:22.270173       1 shared_informer.go:320] Caches are synced for service config
	I1001 20:26:22.270249       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 20:26:22.275100       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b71bb11a38d4d3cee348c612978d120ef0b43650039509e95f793c5c224aab74] <==
	W1001 20:26:12.752358       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1001 20:26:12.752440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:26:12.752509       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1001 20:26:12.752545       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 20:26:12.752632       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1001 20:26:12.752672       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1001 20:26:12.752699       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E1001 20:26:12.752670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 20:26:13.580480       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1001 20:26:13.580515       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:26:13.631812       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1001 20:26:13.631936       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 20:26:13.683188       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1001 20:26:13.683233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:26:13.739263       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1001 20:26:13.739376       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1001 20:26:13.842723       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1001 20:26:13.842825       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:26:13.956364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1001 20:26:13.956564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1001 20:26:13.995541       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1001 20:26:13.995964       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:26:14.010240       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1001 20:26:14.010446       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1001 20:26:15.825099       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 01 20:41:49 embed-certs-106982 kubelet[2901]: E1001 20:41:49.725867    2901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-z27sl" podUID="dd1b4cdc-5ffb-4214-96c9-569ef4f7ba09"
	Oct 01 20:41:56 embed-certs-106982 kubelet[2901]: E1001 20:41:56.062757    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815316062475005,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:41:56 embed-certs-106982 kubelet[2901]: E1001 20:41:56.063333    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815316062475005,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:42:04 embed-certs-106982 kubelet[2901]: E1001 20:42:04.724988    2901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-z27sl" podUID="dd1b4cdc-5ffb-4214-96c9-569ef4f7ba09"
	Oct 01 20:42:06 embed-certs-106982 kubelet[2901]: E1001 20:42:06.066504    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815326065747477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:42:06 embed-certs-106982 kubelet[2901]: E1001 20:42:06.066556    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815326065747477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:42:15 embed-certs-106982 kubelet[2901]: E1001 20:42:15.753589    2901 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 01 20:42:15 embed-certs-106982 kubelet[2901]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 01 20:42:15 embed-certs-106982 kubelet[2901]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 01 20:42:15 embed-certs-106982 kubelet[2901]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 20:42:15 embed-certs-106982 kubelet[2901]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 20:42:16 embed-certs-106982 kubelet[2901]: E1001 20:42:16.068522    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815336068138094,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:42:16 embed-certs-106982 kubelet[2901]: E1001 20:42:16.068553    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815336068138094,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:42:17 embed-certs-106982 kubelet[2901]: E1001 20:42:17.744762    2901 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 01 20:42:17 embed-certs-106982 kubelet[2901]: E1001 20:42:17.745286    2901 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 01 20:42:17 embed-certs-106982 kubelet[2901]: E1001 20:42:17.745641    2901 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-df2j5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation
:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Std
in:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-z27sl_kube-system(dd1b4cdc-5ffb-4214-96c9-569ef4f7ba09): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Oct 01 20:42:17 embed-certs-106982 kubelet[2901]: E1001 20:42:17.748248    2901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-z27sl" podUID="dd1b4cdc-5ffb-4214-96c9-569ef4f7ba09"
	Oct 01 20:42:26 embed-certs-106982 kubelet[2901]: E1001 20:42:26.071173    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815346070494592,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:42:26 embed-certs-106982 kubelet[2901]: E1001 20:42:26.071212    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815346070494592,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:42:29 embed-certs-106982 kubelet[2901]: E1001 20:42:29.725959    2901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-z27sl" podUID="dd1b4cdc-5ffb-4214-96c9-569ef4f7ba09"
	Oct 01 20:42:36 embed-certs-106982 kubelet[2901]: E1001 20:42:36.073721    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815356072856123,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:42:36 embed-certs-106982 kubelet[2901]: E1001 20:42:36.074122    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815356072856123,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:42:41 embed-certs-106982 kubelet[2901]: E1001 20:42:41.727442    2901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-z27sl" podUID="dd1b4cdc-5ffb-4214-96c9-569ef4f7ba09"
	Oct 01 20:42:46 embed-certs-106982 kubelet[2901]: E1001 20:42:46.076702    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815366076243965,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:42:46 embed-certs-106982 kubelet[2901]: E1001 20:42:46.077151    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815366076243965,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [49956e29a325cfda62a0b0ddf30ac17398312cc9fbef9933a45a20dd90e9d7f4] <==
	I1001 20:26:22.820801       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1001 20:26:22.845574       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1001 20:26:22.845724       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1001 20:26:22.876754       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1001 20:26:22.881463       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-106982_256f5572-54ed-4aa8-89f4-d87bbab7310b!
	I1001 20:26:22.880322       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b444fdf4-8983-4279-a53d-46efe0483287", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-106982_256f5572-54ed-4aa8-89f4-d87bbab7310b became leader
	I1001 20:26:22.982503       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-106982_256f5572-54ed-4aa8-89f4-d87bbab7310b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-106982 -n embed-certs-106982
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-106982 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-z27sl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-106982 describe pod metrics-server-6867b74b74-z27sl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-106982 describe pod metrics-server-6867b74b74-z27sl: exit status 1 (64.593228ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-z27sl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-106982 describe pod metrics-server-6867b74b74-z27sl: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (440.75s)
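The only non-running pod in the post-mortem above is metrics-server-6867b74b74-z27sl, and the kubelet entries show why: its container image points at fake.domain/registry.k8s.io/echoserver:1.4, and fake.domain does not resolve ("dial tcp: lookup fake.domain: no such host"), so every pull ends in ErrImagePull/ImagePullBackOff. As a rough manual cross-check outside the harness, assuming the embed-certs-106982 profile is still running and the pod's owning deployment is named metrics-server (inferred from the replica-set hash, not shown in this log), the configured image can be read back directly:

	# hedged sketch, not part of the test harness; context and image taken from the logs above
	kubectl --context embed-certs-106982 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# expected output: fake.domain/registry.k8s.io/echoserver:1.4, matching the pull errors in the kubelet log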

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (370.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-262337 -n no-preload-262337
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-01 20:41:58.112595121 +0000 UTC m=+6469.061398515
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-262337 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-262337 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.784µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-262337 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
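The describe above fails only because the test's 9m0s context deadline had already expired (1.784µs remaining), so no deployment info could be collected for the image assertion. As a hedged follow-up outside that deadline, assuming the no-preload-262337 profile is still running, the same check can be rerun by hand to see whether dashboard-metrics-scraper exists and carries the expected registry.k8s.io/echoserver:1.4 image:

	# hedged sketch, not part of the test harness; names taken from the test output above
	kubectl --context no-preload-262337 -n kubernetes-dashboard get deploy
	kubectl --context no-preload-262337 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# a NotFound error here would indicate the dashboard addon never created the deployment after the restart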
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-262337 -n no-preload-262337
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-262337 logs -n 25
E1001 20:41:59.025510   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-262337 logs -n 25: (1.17545779s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-262337                                   | no-preload-262337            | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC | 01 Oct 24 20:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-106982                 | embed-certs-106982           | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396    | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC | 01 Oct 24 20:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-556200 | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC | 01 Oct 24 20:16 UTC |
	|         | disable-driver-mounts-556200                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC | 01 Oct 24 20:21 UTC |
	|         | default-k8s-diff-port-878552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-106982                                  | embed-certs-106982           | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC | 01 Oct 24 20:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-359369                              | old-k8s-version-359369       | jenkins | v1.34.0 | 01 Oct 24 20:17 UTC | 01 Oct 24 20:17 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-359369             | old-k8s-version-359369       | jenkins | v1.34.0 | 01 Oct 24 20:17 UTC | 01 Oct 24 20:17 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-359369                              | old-k8s-version-359369       | jenkins | v1.34.0 | 01 Oct 24 20:17 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-878552  | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:22 UTC | 01 Oct 24 20:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:22 UTC |                     |
	|         | default-k8s-diff-port-878552                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-878552       | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:24 UTC | 01 Oct 24 20:34 UTC |
	|         | default-k8s-diff-port-878552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-359369                              | old-k8s-version-359369       | jenkins | v1.34.0 | 01 Oct 24 20:40 UTC | 01 Oct 24 20:40 UTC |
	| start   | -p newest-cni-204654 --memory=2200 --alsologtostderr   | newest-cni-204654            | jenkins | v1.34.0 | 01 Oct 24 20:40 UTC | 01 Oct 24 20:41 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-204654             | newest-cni-204654            | jenkins | v1.34.0 | 01 Oct 24 20:41 UTC | 01 Oct 24 20:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-204654                                   | newest-cni-204654            | jenkins | v1.34.0 | 01 Oct 24 20:41 UTC | 01 Oct 24 20:41 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-204654                  | newest-cni-204654            | jenkins | v1.34.0 | 01 Oct 24 20:41 UTC | 01 Oct 24 20:41 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-204654 --memory=2200 --alsologtostderr   | newest-cni-204654            | jenkins | v1.34.0 | 01 Oct 24 20:41 UTC | 01 Oct 24 20:41 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-204654 image list                           | newest-cni-204654            | jenkins | v1.34.0 | 01 Oct 24 20:41 UTC | 01 Oct 24 20:41 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-204654                                   | newest-cni-204654            | jenkins | v1.34.0 | 01 Oct 24 20:41 UTC | 01 Oct 24 20:41 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-204654                                   | newest-cni-204654            | jenkins | v1.34.0 | 01 Oct 24 20:41 UTC | 01 Oct 24 20:41 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-204654                                   | newest-cni-204654            | jenkins | v1.34.0 | 01 Oct 24 20:41 UTC | 01 Oct 24 20:41 UTC |
	| delete  | -p newest-cni-204654                                   | newest-cni-204654            | jenkins | v1.34.0 | 01 Oct 24 20:41 UTC | 01 Oct 24 20:41 UTC |
	| start   | -p auto-983557 --memory=3072                           | auto-983557                  | jenkins | v1.34.0 | 01 Oct 24 20:41 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 20:41:57
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 20:41:57.130715   73835 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:41:57.130960   73835 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:41:57.130968   73835 out.go:358] Setting ErrFile to fd 2...
	I1001 20:41:57.130972   73835 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:41:57.131140   73835 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 20:41:57.131707   73835 out.go:352] Setting JSON to false
	I1001 20:41:57.132658   73835 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8659,"bootTime":1727806658,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 20:41:57.132769   73835 start.go:139] virtualization: kvm guest
	I1001 20:41:57.134653   73835 out.go:177] * [auto-983557] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 20:41:57.135752   73835 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 20:41:57.135774   73835 notify.go:220] Checking for updates...
	I1001 20:41:57.137830   73835 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 20:41:57.139060   73835 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:41:57.140214   73835 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 20:41:57.141468   73835 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 20:41:57.142638   73835 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 20:41:57.144248   73835 config.go:182] Loaded profile config "default-k8s-diff-port-878552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:41:57.144415   73835 config.go:182] Loaded profile config "embed-certs-106982": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:41:57.144512   73835 config.go:182] Loaded profile config "no-preload-262337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:41:57.144589   73835 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 20:41:57.182723   73835 out.go:177] * Using the kvm2 driver based on user configuration
	I1001 20:41:57.183857   73835 start.go:297] selected driver: kvm2
	I1001 20:41:57.183874   73835 start.go:901] validating driver "kvm2" against <nil>
	I1001 20:41:57.183889   73835 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 20:41:57.184730   73835 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:41:57.184819   73835 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-11198/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 20:41:57.201035   73835 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 20:41:57.201087   73835 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 20:41:57.201352   73835 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:41:57.201397   73835 cni.go:84] Creating CNI manager for ""
	I1001 20:41:57.201456   73835 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:41:57.201470   73835 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 20:41:57.201539   73835 start.go:340] cluster config:
	{Name:auto-983557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-983557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgen
tPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:41:57.201654   73835 iso.go:125] acquiring lock: {Name:mk06aa0d5182c9fbfa5e40313025cbec2b4400a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:41:57.203184   73835 out.go:177] * Starting "auto-983557" primary control-plane node in "auto-983557" cluster
	I1001 20:41:57.204145   73835 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 20:41:57.204193   73835 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 20:41:57.204205   73835 cache.go:56] Caching tarball of preloaded images
	I1001 20:41:57.204285   73835 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 20:41:57.204300   73835 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 20:41:57.204426   73835 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557/config.json ...
	I1001 20:41:57.204450   73835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/auto-983557/config.json: {Name:mk8e2e2b2b6c27a5b664f817b7f5806389373543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:41:57.204609   73835 start.go:360] acquireMachinesLock for auto-983557: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 20:41:57.204647   73835 start.go:364] duration metric: took 19.167µs to acquireMachinesLock for "auto-983557"
	I1001 20:41:57.204671   73835 start.go:93] Provisioning new machine with config: &{Name:auto-983557 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:auto-983557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 20:41:57.204729   73835 start.go:125] createHost starting for "" (driver="kvm2")
	
	
	==> CRI-O <==
	Oct 01 20:41:58 no-preload-262337 crio[710]: time="2024-10-01 20:41:58.803412682Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a7040bc2-aaca-4b8b-8a26-49c0cee2899e name=/runtime.v1.RuntimeService/Version
	Oct 01 20:41:58 no-preload-262337 crio[710]: time="2024-10-01 20:41:58.804723027Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b182e537-393b-47b4-9ff0-3ef48fb47d16 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:41:58 no-preload-262337 crio[710]: time="2024-10-01 20:41:58.805077429Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815318805055632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b182e537-393b-47b4-9ff0-3ef48fb47d16 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:41:58 no-preload-262337 crio[710]: time="2024-10-01 20:41:58.805655056Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e5e8fd84-c15c-48d5-b9a1-ff832cdaab00 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:41:58 no-preload-262337 crio[710]: time="2024-10-01 20:41:58.805724541Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e5e8fd84-c15c-48d5-b9a1-ff832cdaab00 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:41:58 no-preload-262337 crio[710]: time="2024-10-01 20:41:58.805984274Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa,PodSandboxId:0a88665266cc2f08e215002897cb59c7a8c13f13b19399b21fde199aa9cd896a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727814170931281786,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8832193a-39b4-49b9-b943-3241bb27fb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3981814eef46226993c1c4a4edb27e11c712d927d02d3108947611a0d4d6b389,PodSandboxId:5ffc250487ecf1179d6a16e31379ac9ab453100b694e252acc70d6597f920522,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727814151245737591,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 815f5080-dfac-4639-8d4d-799975d8f0e1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6,PodSandboxId:6cd49ba952eafac891af87b63cb3223c25ee3f217375a043fafdae31bdbadb89,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814147774153625,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g8jf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fbddef1-a564-4ee8-ab53-ae838d0fd984,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b,PodSandboxId:0a88665266cc2f08e215002897cb59c7a8c13f13b19399b21fde199aa9cd896a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727814140178687240,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
832193a-39b4-49b9-b943-3241bb27fb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8,PodSandboxId:dd90e7d68df5dfcc902fa373cc2aab0991248560d4a60f8989f9c31aee11c584,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727814140143266253,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e25a055c-0203-4fe7-8801-560b9cdb27
bb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f,PodSandboxId:4cb8edf5989f6ac213d0b048567669885d32706e746baf9d96f03201eea66a51,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727814135422490953,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9115d965fc4901e54c07a2cea5b4685d,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd,PodSandboxId:2670bc708b1763bcb44d536dc79e950560b46d7eed02b99a37f5b6fd7e6a6bc8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727814135372060114,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fecf9806c385262cee9f746f5ec0ae30,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d,PodSandboxId:42f37858d2731d544150520dfed9e3863f252ec6c1fc1fff71b0f33fe708d94a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727814135363000750,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9039c1881a40941c5423b90636b917f0,},Annotations:map[string]string{io.kubernetes.container.hash: 12fa
acf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf,PodSandboxId:f808057f488899b902bf93f2185fcb395e721ac7fe24899f95011ae1d77f8b68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727814135341544363,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e600911258e76c20c9684f3a9522644b,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e5e8fd84-c15c-48d5-b9a1-ff832cdaab00 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:41:58 no-preload-262337 crio[710]: time="2024-10-01 20:41:58.850005407Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6c67c4d1-e4a4-47f8-a013-34d38ab399d7 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:41:58 no-preload-262337 crio[710]: time="2024-10-01 20:41:58.850091611Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6c67c4d1-e4a4-47f8-a013-34d38ab399d7 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:41:58 no-preload-262337 crio[710]: time="2024-10-01 20:41:58.851039577Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3e96837d-da1c-4d7f-a26d-b724ede1b851 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:41:58 no-preload-262337 crio[710]: time="2024-10-01 20:41:58.851453796Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815318851433110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e96837d-da1c-4d7f-a26d-b724ede1b851 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:41:58 no-preload-262337 crio[710]: time="2024-10-01 20:41:58.851953938Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ffe0f324-6ec7-413e-b9f7-261f6c1a3aa3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:41:58 no-preload-262337 crio[710]: time="2024-10-01 20:41:58.852018703Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ffe0f324-6ec7-413e-b9f7-261f6c1a3aa3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:41:58 no-preload-262337 crio[710]: time="2024-10-01 20:41:58.852308164Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa,PodSandboxId:0a88665266cc2f08e215002897cb59c7a8c13f13b19399b21fde199aa9cd896a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727814170931281786,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8832193a-39b4-49b9-b943-3241bb27fb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3981814eef46226993c1c4a4edb27e11c712d927d02d3108947611a0d4d6b389,PodSandboxId:5ffc250487ecf1179d6a16e31379ac9ab453100b694e252acc70d6597f920522,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727814151245737591,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 815f5080-dfac-4639-8d4d-799975d8f0e1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6,PodSandboxId:6cd49ba952eafac891af87b63cb3223c25ee3f217375a043fafdae31bdbadb89,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814147774153625,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g8jf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fbddef1-a564-4ee8-ab53-ae838d0fd984,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b,PodSandboxId:0a88665266cc2f08e215002897cb59c7a8c13f13b19399b21fde199aa9cd896a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727814140178687240,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
832193a-39b4-49b9-b943-3241bb27fb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8,PodSandboxId:dd90e7d68df5dfcc902fa373cc2aab0991248560d4a60f8989f9c31aee11c584,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727814140143266253,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e25a055c-0203-4fe7-8801-560b9cdb27
bb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f,PodSandboxId:4cb8edf5989f6ac213d0b048567669885d32706e746baf9d96f03201eea66a51,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727814135422490953,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9115d965fc4901e54c07a2cea5b4685d,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd,PodSandboxId:2670bc708b1763bcb44d536dc79e950560b46d7eed02b99a37f5b6fd7e6a6bc8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727814135372060114,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fecf9806c385262cee9f746f5ec0ae30,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d,PodSandboxId:42f37858d2731d544150520dfed9e3863f252ec6c1fc1fff71b0f33fe708d94a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727814135363000750,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9039c1881a40941c5423b90636b917f0,},Annotations:map[string]string{io.kubernetes.container.hash: 12fa
acf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf,PodSandboxId:f808057f488899b902bf93f2185fcb395e721ac7fe24899f95011ae1d77f8b68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727814135341544363,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e600911258e76c20c9684f3a9522644b,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ffe0f324-6ec7-413e-b9f7-261f6c1a3aa3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:41:58 no-preload-262337 crio[710]: time="2024-10-01 20:41:58.885332235Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=366d9a4e-6d8d-4b57-a191-c941db2f4f68 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 01 20:41:58 no-preload-262337 crio[710]: time="2024-10-01 20:41:58.885660134Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:5ffc250487ecf1179d6a16e31379ac9ab453100b694e252acc70d6597f920522,Metadata:&PodSandboxMetadata{Name:busybox,Uid:815f5080-dfac-4639-8d4d-799975d8f0e1,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727814147561559470,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 815f5080-dfac-4639-8d4d-799975d8f0e1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-01T20:22:19.680881451Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6cd49ba952eafac891af87b63cb3223c25ee3f217375a043fafdae31bdbadb89,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-g8jf8,Uid:7fbddef1-a564-4ee8-ab53-ae838d0fd984,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:17278141475565345
13,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-g8jf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fbddef1-a564-4ee8-ab53-ae838d0fd984,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-01T20:22:19.680884963Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d1697404c6988289f5eac4af62c40f6dcf2b68a94d65e7bbcaab1d7ba0446412,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-2rpwt,Uid:235515ab-28fc-437b-983a-243f7a8fb183,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727814145751825277,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-2rpwt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 235515ab-28fc-437b-983a-243f7a8fb183,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-01T20:22:19.6
80875028Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dd90e7d68df5dfcc902fa373cc2aab0991248560d4a60f8989f9c31aee11c584,Metadata:&PodSandboxMetadata{Name:kube-proxy-7rrkn,Uid:e25a055c-0203-4fe7-8801-560b9cdb27bb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727814140000771269,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-7rrkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e25a055c-0203-4fe7-8801-560b9cdb27bb,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-01T20:22:19.680872074Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0a88665266cc2f08e215002897cb59c7a8c13f13b19399b21fde199aa9cd896a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8832193a-39b4-49b9-b943-3241bb27fb8d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727814139998789495,Labels:map[string]
string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8832193a-39b4-49b9-b943-3241bb27fb8d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io
/config.seen: 2024-10-01T20:22:19.680883753Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2670bc708b1763bcb44d536dc79e950560b46d7eed02b99a37f5b6fd7e6a6bc8,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-262337,Uid:fecf9806c385262cee9f746f5ec0ae30,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727814135185047583,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fecf9806c385262cee9f746f5ec0ae30,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.93:8443,kubernetes.io/config.hash: fecf9806c385262cee9f746f5ec0ae30,kubernetes.io/config.seen: 2024-10-01T20:22:14.672245600Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f808057f488899b902bf93f2185fcb395e721ac7fe24899f95011ae1d77f8b68,Metadata:&PodSandboxMetadata{Na
me:kube-controller-manager-no-preload-262337,Uid:e600911258e76c20c9684f3a9522644b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727814135176210001,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e600911258e76c20c9684f3a9522644b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e600911258e76c20c9684f3a9522644b,kubernetes.io/config.seen: 2024-10-01T20:22:14.672246904Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:42f37858d2731d544150520dfed9e3863f252ec6c1fc1fff71b0f33fe708d94a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-262337,Uid:9039c1881a40941c5423b90636b917f0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727814135170719339,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name:
kube-scheduler-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9039c1881a40941c5423b90636b917f0,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9039c1881a40941c5423b90636b917f0,kubernetes.io/config.seen: 2024-10-01T20:22:14.672240856Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4cb8edf5989f6ac213d0b048567669885d32706e746baf9d96f03201eea66a51,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-262337,Uid:9115d965fc4901e54c07a2cea5b4685d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727814135169474214,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9115d965fc4901e54c07a2cea5b4685d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.93:2379,kubernetes.io/config.hash: 9115d965fc4901e54c07a2cea5b4685d,kube
rnetes.io/config.seen: 2024-10-01T20:22:14.715254164Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=366d9a4e-6d8d-4b57-a191-c941db2f4f68 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 01 20:41:58 no-preload-262337 crio[710]: time="2024-10-01 20:41:58.886434797Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ce8404b6-e6d6-4e34-82dc-64313148f1fc name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:41:58 no-preload-262337 crio[710]: time="2024-10-01 20:41:58.886520136Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ce8404b6-e6d6-4e34-82dc-64313148f1fc name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:41:58 no-preload-262337 crio[710]: time="2024-10-01 20:41:58.886788384Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa,PodSandboxId:0a88665266cc2f08e215002897cb59c7a8c13f13b19399b21fde199aa9cd896a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727814170931281786,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8832193a-39b4-49b9-b943-3241bb27fb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3981814eef46226993c1c4a4edb27e11c712d927d02d3108947611a0d4d6b389,PodSandboxId:5ffc250487ecf1179d6a16e31379ac9ab453100b694e252acc70d6597f920522,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727814151245737591,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 815f5080-dfac-4639-8d4d-799975d8f0e1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6,PodSandboxId:6cd49ba952eafac891af87b63cb3223c25ee3f217375a043fafdae31bdbadb89,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814147774153625,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g8jf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fbddef1-a564-4ee8-ab53-ae838d0fd984,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b,PodSandboxId:0a88665266cc2f08e215002897cb59c7a8c13f13b19399b21fde199aa9cd896a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727814140178687240,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
832193a-39b4-49b9-b943-3241bb27fb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8,PodSandboxId:dd90e7d68df5dfcc902fa373cc2aab0991248560d4a60f8989f9c31aee11c584,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727814140143266253,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e25a055c-0203-4fe7-8801-560b9cdb27
bb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f,PodSandboxId:4cb8edf5989f6ac213d0b048567669885d32706e746baf9d96f03201eea66a51,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727814135422490953,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9115d965fc4901e54c07a2cea5b4685d,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd,PodSandboxId:2670bc708b1763bcb44d536dc79e950560b46d7eed02b99a37f5b6fd7e6a6bc8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727814135372060114,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fecf9806c385262cee9f746f5ec0ae30,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d,PodSandboxId:42f37858d2731d544150520dfed9e3863f252ec6c1fc1fff71b0f33fe708d94a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727814135363000750,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9039c1881a40941c5423b90636b917f0,},Annotations:map[string]string{io.kubernetes.container.hash: 12fa
acf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf,PodSandboxId:f808057f488899b902bf93f2185fcb395e721ac7fe24899f95011ae1d77f8b68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727814135341544363,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e600911258e76c20c9684f3a9522644b,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ce8404b6-e6d6-4e34-82dc-64313148f1fc name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:41:58 no-preload-262337 crio[710]: time="2024-10-01 20:41:58.892412982Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7739d145-049f-4769-8e09-8dded15ce1d1 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:41:58 no-preload-262337 crio[710]: time="2024-10-01 20:41:58.892486707Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7739d145-049f-4769-8e09-8dded15ce1d1 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:41:58 no-preload-262337 crio[710]: time="2024-10-01 20:41:58.893461286Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8beced2d-92a8-4e4b-86d8-e828424b2e6c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:41:58 no-preload-262337 crio[710]: time="2024-10-01 20:41:58.893823864Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815318893799641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8beced2d-92a8-4e4b-86d8-e828424b2e6c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:41:58 no-preload-262337 crio[710]: time="2024-10-01 20:41:58.894406540Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=51b03165-252d-434b-8994-f9b96469a3b9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:41:58 no-preload-262337 crio[710]: time="2024-10-01 20:41:58.894496181Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=51b03165-252d-434b-8994-f9b96469a3b9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:41:58 no-preload-262337 crio[710]: time="2024-10-01 20:41:58.894839026Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa,PodSandboxId:0a88665266cc2f08e215002897cb59c7a8c13f13b19399b21fde199aa9cd896a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727814170931281786,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8832193a-39b4-49b9-b943-3241bb27fb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3981814eef46226993c1c4a4edb27e11c712d927d02d3108947611a0d4d6b389,PodSandboxId:5ffc250487ecf1179d6a16e31379ac9ab453100b694e252acc70d6597f920522,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727814151245737591,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 815f5080-dfac-4639-8d4d-799975d8f0e1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6,PodSandboxId:6cd49ba952eafac891af87b63cb3223c25ee3f217375a043fafdae31bdbadb89,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814147774153625,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g8jf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fbddef1-a564-4ee8-ab53-ae838d0fd984,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b,PodSandboxId:0a88665266cc2f08e215002897cb59c7a8c13f13b19399b21fde199aa9cd896a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727814140178687240,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
832193a-39b4-49b9-b943-3241bb27fb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8,PodSandboxId:dd90e7d68df5dfcc902fa373cc2aab0991248560d4a60f8989f9c31aee11c584,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727814140143266253,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e25a055c-0203-4fe7-8801-560b9cdb27
bb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f,PodSandboxId:4cb8edf5989f6ac213d0b048567669885d32706e746baf9d96f03201eea66a51,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727814135422490953,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9115d965fc4901e54c07a2cea5b4685d,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd,PodSandboxId:2670bc708b1763bcb44d536dc79e950560b46d7eed02b99a37f5b6fd7e6a6bc8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727814135372060114,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fecf9806c385262cee9f746f5ec0ae30,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d,PodSandboxId:42f37858d2731d544150520dfed9e3863f252ec6c1fc1fff71b0f33fe708d94a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727814135363000750,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9039c1881a40941c5423b90636b917f0,},Annotations:map[string]string{io.kubernetes.container.hash: 12fa
acf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf,PodSandboxId:f808057f488899b902bf93f2185fcb395e721ac7fe24899f95011ae1d77f8b68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727814135341544363,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-262337,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e600911258e76c20c9684f3a9522644b,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=51b03165-252d-434b-8994-f9b96469a3b9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a5ae72bcebfe4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   0a88665266cc2       storage-provisioner
	3981814eef462       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   5ffc250487ecf       busybox
	4380c36f31b67       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      19 minutes ago      Running             coredns                   1                   6cd49ba952eaf       coredns-7c65d6cfc9-g8jf8
	652cab583d763       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   0a88665266cc2       storage-provisioner
	fc3552d19417a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      19 minutes ago      Running             kube-proxy                1                   dd90e7d68df5d       kube-proxy-7rrkn
	586d6feee0436       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      19 minutes ago      Running             etcd                      1                   4cb8edf5989f6       etcd-no-preload-262337
	a64415a2dee8b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      19 minutes ago      Running             kube-apiserver            1                   2670bc708b176       kube-apiserver-no-preload-262337
	89f0e3dd97e8a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      19 minutes ago      Running             kube-scheduler            1                   42f37858d2731       kube-scheduler-no-preload-262337
	69adf90addf5f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      19 minutes ago      Running             kube-controller-manager   1                   f808057f48889       kube-controller-manager-no-preload-262337
	
	
	==> coredns [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44278 - 21795 "HINFO IN 2150363184310238732.2692046288970790068. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013592862s
	
	
	==> describe nodes <==
	Name:               no-preload-262337
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-262337
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=no-preload-262337
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T20_12_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 20:12:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-262337
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 20:41:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 20:38:09 +0000   Tue, 01 Oct 2024 20:12:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 20:38:09 +0000   Tue, 01 Oct 2024 20:12:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 20:38:09 +0000   Tue, 01 Oct 2024 20:12:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 20:38:09 +0000   Tue, 01 Oct 2024 20:22:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.93
	  Hostname:    no-preload-262337
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2b245fbd1b7b4233923322e30b8c6875
	  System UUID:                2b245fbd-1b7b-4233-9233-22e30b8c6875
	  Boot ID:                    1550a445-c7c9-4305-8e82-dff1255f4b52
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-g8jf8                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-no-preload-262337                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-no-preload-262337             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-no-preload-262337    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-7rrkn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-no-preload-262337             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-2rpwt              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node no-preload-262337 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node no-preload-262337 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node no-preload-262337 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node no-preload-262337 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node no-preload-262337 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node no-preload-262337 status is now: NodeHasSufficientPID
	  Normal  NodeReady                29m                kubelet          Node no-preload-262337 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node no-preload-262337 event: Registered Node no-preload-262337 in Controller
	  Normal  CIDRAssignmentFailed     29m                cidrAllocator    Node no-preload-262337 status is now: CIDRAssignmentFailed
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-262337 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-262337 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-262337 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-262337 event: Registered Node no-preload-262337 in Controller
	
	
	==> dmesg <==
	[Oct 1 20:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054207] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040966] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.074423] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.998053] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.538389] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.802449] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.059617] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065206] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.169845] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.122284] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.287549] systemd-fstab-generator[702]: Ignoring "noauto" option for root device
	[Oct 1 20:22] systemd-fstab-generator[1241]: Ignoring "noauto" option for root device
	[  +0.059109] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.018009] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +3.333709] kauditd_printk_skb: 97 callbacks suppressed
	[  +4.257438] systemd-fstab-generator[1982]: Ignoring "noauto" option for root device
	[  +3.711729] kauditd_printk_skb: 61 callbacks suppressed
	[  +5.521799] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f] <==
	{"level":"info","ts":"2024-10-01T20:22:17.465841Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6e2c9029caadaa became leader at term 3"}
	{"level":"info","ts":"2024-10-01T20:22:17.465866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4e6e2c9029caadaa elected leader 4e6e2c9029caadaa at term 3"}
	{"level":"info","ts":"2024-10-01T20:22:17.467383Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"4e6e2c9029caadaa","local-member-attributes":"{Name:no-preload-262337 ClientURLs:[https://192.168.61.93:2379]}","request-path":"/0/members/4e6e2c9029caadaa/attributes","cluster-id":"4a4285095021b5a3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-01T20:22:17.467443Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T20:22:17.467623Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T20:22:17.468005Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-01T20:22:17.468034Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-01T20:22:17.468737Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T20:22:17.469472Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T20:22:17.469549Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.93:2379"}
	{"level":"info","ts":"2024-10-01T20:22:17.470337Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-01T20:32:17.499820Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":865}
	{"level":"info","ts":"2024-10-01T20:32:17.509482Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":865,"took":"9.279816ms","hash":505999604,"current-db-size-bytes":2740224,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2740224,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-10-01T20:32:17.509540Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":505999604,"revision":865,"compact-revision":-1}
	{"level":"info","ts":"2024-10-01T20:37:17.510579Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1108}
	{"level":"info","ts":"2024-10-01T20:37:17.514801Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1108,"took":"3.417749ms","hash":2857454796,"current-db-size-bytes":2740224,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1642496,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-10-01T20:37:17.514894Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2857454796,"revision":1108,"compact-revision":865}
	{"level":"info","ts":"2024-10-01T20:40:48.130267Z","caller":"traceutil/trace.go:171","msg":"trace[372977085] linearizableReadLoop","detail":"{readStateIndex:1783; appliedIndex:1782; }","duration":"188.804655ms","start":"2024-10-01T20:40:47.941429Z","end":"2024-10-01T20:40:48.130233Z","steps":["trace[372977085] 'read index received'  (duration: 188.499507ms)","trace[372977085] 'applied index is now lower than readState.Index'  (duration: 304.144µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-01T20:40:48.130520Z","caller":"traceutil/trace.go:171","msg":"trace[737111840] transaction","detail":"{read_only:false; response_revision:1522; number_of_response:1; }","duration":"304.668223ms","start":"2024-10-01T20:40:47.825835Z","end":"2024-10-01T20:40:48.130503Z","steps":["trace[737111840] 'process raft request'  (duration: 304.185336ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T20:40:48.130660Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.163903ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T20:40:48.130722Z","caller":"traceutil/trace.go:171","msg":"trace[1975583479] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1522; }","duration":"189.303291ms","start":"2024-10-01T20:40:47.941411Z","end":"2024-10-01T20:40:48.130714Z","steps":["trace[1975583479] 'agreement among raft nodes before linearized reading'  (duration: 189.143555ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T20:40:48.130958Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.932028ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T20:40:48.130999Z","caller":"traceutil/trace.go:171","msg":"trace[855788320] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:1522; }","duration":"162.988885ms","start":"2024-10-01T20:40:47.968003Z","end":"2024-10-01T20:40:48.130992Z","steps":["trace[855788320] 'agreement among raft nodes before linearized reading'  (duration: 162.913264ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T20:40:48.132372Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T20:40:47.825818Z","time spent":"304.87582ms","remote":"127.0.0.1:59326","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1521 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-10-01T20:41:44.532818Z","caller":"traceutil/trace.go:171","msg":"trace[1375708008] transaction","detail":"{read_only:false; response_revision:1568; number_of_response:1; }","duration":"103.393172ms","start":"2024-10-01T20:41:44.429395Z","end":"2024-10-01T20:41:44.532788Z","steps":["trace[1375708008] 'process raft request'  (duration: 103.258849ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:41:59 up 20 min,  0 users,  load average: 0.04, 0.20, 0.18
	Linux no-preload-262337 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1001 20:37:19.773799       1 handler_proxy.go:99] no RequestInfo found in the context
	E1001 20:37:19.773882       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1001 20:37:19.775042       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1001 20:37:19.775216       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1001 20:38:19.776108       1 handler_proxy.go:99] no RequestInfo found in the context
	W1001 20:38:19.776256       1 handler_proxy.go:99] no RequestInfo found in the context
	E1001 20:38:19.776361       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1001 20:38:19.776450       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1001 20:38:19.777581       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1001 20:38:19.777685       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1001 20:40:19.778379       1 handler_proxy.go:99] no RequestInfo found in the context
	E1001 20:40:19.778594       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1001 20:40:19.778678       1 handler_proxy.go:99] no RequestInfo found in the context
	E1001 20:40:19.778705       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1001 20:40:19.779793       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1001 20:40:19.779890       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf] <==
	E1001 20:36:52.441275       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:36:53.084255       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:37:22.447665       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:37:23.092955       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:37:52.452935       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:37:53.101094       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1001 20:38:09.091324       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-262337"
	E1001 20:38:22.459689       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:38:23.109182       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1001 20:38:33.760495       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="229.749µs"
	I1001 20:38:46.760925       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="140.268µs"
	E1001 20:38:52.467218       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:38:53.126232       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:39:22.473794       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:39:23.133676       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:39:52.480794       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:39:53.142251       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:40:22.489479       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:40:23.150564       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:40:52.497410       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:40:53.168605       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:41:22.504979       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:41:23.178998       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:41:52.511543       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:41:53.188352       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 20:22:20.521360       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 20:22:20.550791       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.93"]
	E1001 20:22:20.550991       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 20:22:20.619277       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 20:22:20.619371       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 20:22:20.619410       1 server_linux.go:169] "Using iptables Proxier"
	I1001 20:22:20.622114       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 20:22:20.623219       1 server.go:483] "Version info" version="v1.31.1"
	I1001 20:22:20.623289       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 20:22:20.626084       1 config.go:199] "Starting service config controller"
	I1001 20:22:20.626763       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 20:22:20.627025       1 config.go:105] "Starting endpoint slice config controller"
	I1001 20:22:20.627071       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 20:22:20.628413       1 config.go:328] "Starting node config controller"
	I1001 20:22:20.628443       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 20:22:20.728188       1 shared_informer.go:320] Caches are synced for service config
	I1001 20:22:20.728312       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 20:22:20.728549       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d] <==
	I1001 20:22:16.509384       1 serving.go:386] Generated self-signed cert in-memory
	W1001 20:22:18.734552       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1001 20:22:18.734645       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1001 20:22:18.734674       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1001 20:22:18.734698       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1001 20:22:18.782848       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1001 20:22:18.783224       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 20:22:18.792980       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 20:22:18.793078       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1001 20:22:18.793099       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1001 20:22:18.793248       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1001 20:22:18.894551       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 01 20:40:46 no-preload-262337 kubelet[1371]: E1001 20:40:46.742705    1371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2rpwt" podUID="235515ab-28fc-437b-983a-243f7a8fb183"
	Oct 01 20:40:55 no-preload-262337 kubelet[1371]: E1001 20:40:55.044784    1371 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815255044062052,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:40:55 no-preload-262337 kubelet[1371]: E1001 20:40:55.045087    1371 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815255044062052,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:41:01 no-preload-262337 kubelet[1371]: E1001 20:41:01.742999    1371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2rpwt" podUID="235515ab-28fc-437b-983a-243f7a8fb183"
	Oct 01 20:41:05 no-preload-262337 kubelet[1371]: E1001 20:41:05.047811    1371 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815265047341864,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:41:05 no-preload-262337 kubelet[1371]: E1001 20:41:05.048447    1371 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815265047341864,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:41:12 no-preload-262337 kubelet[1371]: E1001 20:41:12.743852    1371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2rpwt" podUID="235515ab-28fc-437b-983a-243f7a8fb183"
	Oct 01 20:41:14 no-preload-262337 kubelet[1371]: E1001 20:41:14.759576    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 01 20:41:14 no-preload-262337 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 01 20:41:14 no-preload-262337 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 01 20:41:14 no-preload-262337 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 20:41:14 no-preload-262337 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 20:41:15 no-preload-262337 kubelet[1371]: E1001 20:41:15.051038    1371 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815275050279112,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:41:15 no-preload-262337 kubelet[1371]: E1001 20:41:15.051069    1371 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815275050279112,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:41:25 no-preload-262337 kubelet[1371]: E1001 20:41:25.052982    1371 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815285052576106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:41:25 no-preload-262337 kubelet[1371]: E1001 20:41:25.053421    1371 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815285052576106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:41:26 no-preload-262337 kubelet[1371]: E1001 20:41:26.742554    1371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2rpwt" podUID="235515ab-28fc-437b-983a-243f7a8fb183"
	Oct 01 20:41:35 no-preload-262337 kubelet[1371]: E1001 20:41:35.055337    1371 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815295054912394,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:41:35 no-preload-262337 kubelet[1371]: E1001 20:41:35.055694    1371 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815295054912394,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:41:38 no-preload-262337 kubelet[1371]: E1001 20:41:38.743617    1371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2rpwt" podUID="235515ab-28fc-437b-983a-243f7a8fb183"
	Oct 01 20:41:45 no-preload-262337 kubelet[1371]: E1001 20:41:45.057855    1371 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815305057372841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:41:45 no-preload-262337 kubelet[1371]: E1001 20:41:45.058384    1371 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815305057372841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:41:51 no-preload-262337 kubelet[1371]: E1001 20:41:51.741980    1371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2rpwt" podUID="235515ab-28fc-437b-983a-243f7a8fb183"
	Oct 01 20:41:55 no-preload-262337 kubelet[1371]: E1001 20:41:55.060823    1371 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815315060099312,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:41:55 no-preload-262337 kubelet[1371]: E1001 20:41:55.061319    1371 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815315060099312,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b] <==
	I1001 20:22:20.317219       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1001 20:22:50.319942       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa] <==
	I1001 20:22:51.007033       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1001 20:22:51.019717       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1001 20:22:51.019797       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1001 20:23:08.420447       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1001 20:23:08.420680       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-262337_51785853-c7f5-43d5-a7af-4fd5eb81ccb8!
	I1001 20:23:08.422063       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"836cd95c-e80f-446d-a21e-bcc0177b8324", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-262337_51785853-c7f5-43d5-a7af-4fd5eb81ccb8 became leader
	I1001 20:23:08.521800       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-262337_51785853-c7f5-43d5-a7af-4fd5eb81ccb8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-262337 -n no-preload-262337
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-262337 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-2rpwt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-262337 describe pod metrics-server-6867b74b74-2rpwt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-262337 describe pod metrics-server-6867b74b74-2rpwt: exit status 1 (68.343757ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-2rpwt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-262337 describe pod metrics-server-6867b74b74-2rpwt: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (370.26s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (93.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
... [the identical warning line above was emitted 78 times in total while the poll kept retrying]
E1001 20:40:02.100312   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.110:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.110:8443: connect: connection refused
... [the identical warning line above was emitted 15 more times before the 9m0s wait expired]
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-359369 -n old-k8s-version-359369
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-359369 -n old-k8s-version-359369: exit status 2 (231.767127ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-359369" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-359369 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-359369 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.789µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-359369 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-359369 -n old-k8s-version-359369
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-359369 -n old-k8s-version-359369: exit status 2 (229.801079ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-359369 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-402897                              | cert-expiration-402897       | jenkins | v1.34.0 | 01 Oct 24 20:12 UTC | 01 Oct 24 20:12 UTC |
	| start   | -p embed-certs-106982                                  | embed-certs-106982           | jenkins | v1.34.0 | 01 Oct 24 20:12 UTC | 01 Oct 24 20:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-262337             | no-preload-262337            | jenkins | v1.34.0 | 01 Oct 24 20:13 UTC | 01 Oct 24 20:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-262337                                   | no-preload-262337            | jenkins | v1.34.0 | 01 Oct 24 20:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-106982            | embed-certs-106982           | jenkins | v1.34.0 | 01 Oct 24 20:13 UTC | 01 Oct 24 20:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-106982                                  | embed-certs-106982           | jenkins | v1.34.0 | 01 Oct 24 20:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396    | jenkins | v1.34.0 | 01 Oct 24 20:14 UTC | 01 Oct 24 20:14 UTC |
	| start   | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396    | jenkins | v1.34.0 | 01 Oct 24 20:14 UTC | 01 Oct 24 20:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396    | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396    | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC | 01 Oct 24 20:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-359369        | old-k8s-version-359369       | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-262337                  | no-preload-262337            | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-262337                                   | no-preload-262337            | jenkins | v1.34.0 | 01 Oct 24 20:15 UTC | 01 Oct 24 20:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-106982                 | embed-certs-106982           | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-869396                           | kubernetes-upgrade-869396    | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC | 01 Oct 24 20:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-556200 | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC | 01 Oct 24 20:16 UTC |
	|         | disable-driver-mounts-556200                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC | 01 Oct 24 20:21 UTC |
	|         | default-k8s-diff-port-878552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-106982                                  | embed-certs-106982           | jenkins | v1.34.0 | 01 Oct 24 20:16 UTC | 01 Oct 24 20:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-359369                              | old-k8s-version-359369       | jenkins | v1.34.0 | 01 Oct 24 20:17 UTC | 01 Oct 24 20:17 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-359369             | old-k8s-version-359369       | jenkins | v1.34.0 | 01 Oct 24 20:17 UTC | 01 Oct 24 20:17 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-359369                              | old-k8s-version-359369       | jenkins | v1.34.0 | 01 Oct 24 20:17 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-878552  | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:22 UTC | 01 Oct 24 20:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:22 UTC |                     |
	|         | default-k8s-diff-port-878552                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-878552       | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-878552 | jenkins | v1.34.0 | 01 Oct 24 20:24 UTC | 01 Oct 24 20:34 UTC |
	|         | default-k8s-diff-port-878552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 20:24:40
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 20:24:40.832961   68418 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:24:40.833061   68418 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:24:40.833066   68418 out.go:358] Setting ErrFile to fd 2...
	I1001 20:24:40.833070   68418 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:24:40.833265   68418 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 20:24:40.833818   68418 out.go:352] Setting JSON to false
	I1001 20:24:40.834796   68418 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7623,"bootTime":1727806658,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 20:24:40.834894   68418 start.go:139] virtualization: kvm guest
	I1001 20:24:40.837148   68418 out.go:177] * [default-k8s-diff-port-878552] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 20:24:40.838511   68418 notify.go:220] Checking for updates...
	I1001 20:24:40.838551   68418 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 20:24:40.839938   68418 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 20:24:40.841161   68418 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:24:40.842268   68418 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 20:24:40.843373   68418 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 20:24:40.844538   68418 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 20:24:40.846141   68418 config.go:182] Loaded profile config "default-k8s-diff-port-878552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:24:40.846513   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:24:40.846561   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:24:40.862168   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42661
	I1001 20:24:40.862628   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:24:40.863294   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:24:40.863326   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:24:40.863699   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:24:40.863903   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:24:40.864180   68418 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 20:24:40.864548   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:24:40.864620   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:24:40.880173   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45711
	I1001 20:24:40.880719   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:24:40.881220   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:24:40.881245   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:24:40.881581   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:24:40.881795   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:24:40.920802   68418 out.go:177] * Using the kvm2 driver based on existing profile
	I1001 20:24:40.921986   68418 start.go:297] selected driver: kvm2
	I1001 20:24:40.921999   68418 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-878552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-878552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.4 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:24:40.922122   68418 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 20:24:40.922802   68418 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:24:40.922895   68418 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-11198/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 20:24:40.938386   68418 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 20:24:40.938811   68418 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:24:40.938841   68418 cni.go:84] Creating CNI manager for ""
	I1001 20:24:40.938880   68418 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:24:40.938931   68418 start.go:340] cluster config:
	{Name:default-k8s-diff-port-878552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-878552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.4 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:24:40.939036   68418 iso.go:125] acquiring lock: {Name:mk06aa0d5182c9fbfa5e40313025cbec2b4400a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:24:40.940656   68418 out.go:177] * Starting "default-k8s-diff-port-878552" primary control-plane node in "default-k8s-diff-port-878552" cluster
	I1001 20:24:40.941946   68418 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 20:24:40.942006   68418 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 20:24:40.942023   68418 cache.go:56] Caching tarball of preloaded images
	I1001 20:24:40.942155   68418 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 20:24:40.942166   68418 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 20:24:40.942298   68418 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/config.json ...
	I1001 20:24:40.942537   68418 start.go:360] acquireMachinesLock for default-k8s-diff-port-878552: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 20:24:40.942581   68418 start.go:364] duration metric: took 24.859µs to acquireMachinesLock for "default-k8s-diff-port-878552"
	I1001 20:24:40.942601   68418 start.go:96] Skipping create...Using existing machine configuration
	I1001 20:24:40.942608   68418 fix.go:54] fixHost starting: 
	I1001 20:24:40.942921   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:24:40.942954   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:24:40.958447   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36711
	I1001 20:24:40.958976   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:24:40.960190   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:24:40.960223   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:24:40.960575   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:24:40.960770   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:24:40.960921   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetState
	I1001 20:24:40.962765   68418 fix.go:112] recreateIfNeeded on default-k8s-diff-port-878552: state=Running err=<nil>
	W1001 20:24:40.962786   68418 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 20:24:40.964520   68418 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-878552" VM ...
	I1001 20:24:37.763268   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:40.262669   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:39.025570   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:39.040932   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:39.041011   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:39.076620   65592 cri.go:89] found id: ""
	I1001 20:24:39.076649   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.076659   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:39.076666   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:39.076734   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:39.113395   65592 cri.go:89] found id: ""
	I1001 20:24:39.113422   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.113430   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:39.113436   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:39.113490   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:39.147839   65592 cri.go:89] found id: ""
	I1001 20:24:39.147877   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.147890   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:39.147899   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:39.147966   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:39.179721   65592 cri.go:89] found id: ""
	I1001 20:24:39.179758   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.179769   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:39.179777   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:39.179842   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:39.211511   65592 cri.go:89] found id: ""
	I1001 20:24:39.211541   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.211549   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:39.211554   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:39.211603   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:39.243517   65592 cri.go:89] found id: ""
	I1001 20:24:39.243544   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.243552   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:39.243557   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:39.243623   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:39.276159   65592 cri.go:89] found id: ""
	I1001 20:24:39.276182   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.276189   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:39.276195   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:39.276239   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:39.307242   65592 cri.go:89] found id: ""
	I1001 20:24:39.307274   65592 logs.go:276] 0 containers: []
	W1001 20:24:39.307285   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:39.307295   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:39.307307   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:39.387442   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:39.387486   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:39.423123   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:39.423156   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:39.474648   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:39.474686   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:39.488129   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:39.488158   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:39.557478   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:42.058114   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:42.071979   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:42.072056   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:42.110529   65592 cri.go:89] found id: ""
	I1001 20:24:42.110557   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.110565   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:42.110570   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:42.110619   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:42.145408   65592 cri.go:89] found id: ""
	I1001 20:24:42.145436   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.145445   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:42.145450   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:42.145509   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:42.180602   65592 cri.go:89] found id: ""
	I1001 20:24:42.180641   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.180655   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:42.180664   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:42.180722   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:38.119187   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:40.619080   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:40.965599   68418 machine.go:93] provisionDockerMachine start ...
	I1001 20:24:40.965619   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:24:40.965852   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:24:40.968710   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:24:40.969253   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:20:43 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:24:40.969286   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:24:40.969517   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:24:40.969724   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:24:40.969960   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:24:40.970112   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:24:40.970316   68418 main.go:141] libmachine: Using SSH client type: native
	I1001 20:24:40.970570   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1001 20:24:40.970584   68418 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 20:24:43.860755   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:24:42.262933   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:44.762857   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:42.214116   65592 cri.go:89] found id: ""
	I1001 20:24:42.214148   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.214160   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:42.214168   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:42.214224   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:42.246785   65592 cri.go:89] found id: ""
	I1001 20:24:42.246814   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.246825   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:42.246832   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:42.246900   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:42.281586   65592 cri.go:89] found id: ""
	I1001 20:24:42.281633   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.281645   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:42.281660   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:42.281724   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:42.318982   65592 cri.go:89] found id: ""
	I1001 20:24:42.319015   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.319025   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:42.319032   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:42.319085   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:42.350592   65592 cri.go:89] found id: ""
	I1001 20:24:42.350619   65592 logs.go:276] 0 containers: []
	W1001 20:24:42.350638   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:42.350646   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:42.350659   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:42.429111   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:42.429152   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:42.466741   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:42.466775   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:42.516829   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:42.516870   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:42.530174   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:42.530201   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:42.600444   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:45.101469   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:45.113821   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:45.113904   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:45.148105   65592 cri.go:89] found id: ""
	I1001 20:24:45.148132   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.148146   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:45.148152   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:45.148196   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:45.180980   65592 cri.go:89] found id: ""
	I1001 20:24:45.181012   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.181027   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:45.181046   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:45.181113   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:45.216971   65592 cri.go:89] found id: ""
	I1001 20:24:45.217001   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.217010   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:45.217015   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:45.217060   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:45.252240   65592 cri.go:89] found id: ""
	I1001 20:24:45.252275   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.252287   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:45.252294   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:45.252354   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:45.287389   65592 cri.go:89] found id: ""
	I1001 20:24:45.287419   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.287434   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:45.287440   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:45.287501   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:45.319980   65592 cri.go:89] found id: ""
	I1001 20:24:45.320015   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.320027   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:45.320035   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:45.320101   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:45.351894   65592 cri.go:89] found id: ""
	I1001 20:24:45.351920   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.351931   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:45.351936   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:45.351984   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:45.385370   65592 cri.go:89] found id: ""
	I1001 20:24:45.385400   65592 logs.go:276] 0 containers: []
	W1001 20:24:45.385412   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:45.385423   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:45.385485   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:45.449558   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:45.449584   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:45.449596   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:45.524322   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:45.524372   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:45.560729   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:45.560757   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:45.614098   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:45.614139   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:43.119614   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:45.121666   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:47.618362   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:46.932587   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:24:47.263384   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:49.761472   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:48.129944   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:48.143420   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:48.143496   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:48.175627   65592 cri.go:89] found id: ""
	I1001 20:24:48.175668   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.175682   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:48.175689   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:48.175747   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:48.210422   65592 cri.go:89] found id: ""
	I1001 20:24:48.210451   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.210462   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:48.210470   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:48.210535   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:48.243916   65592 cri.go:89] found id: ""
	I1001 20:24:48.243952   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.243963   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:48.243972   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:48.244027   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:48.275802   65592 cri.go:89] found id: ""
	I1001 20:24:48.275830   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.275845   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:48.275857   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:48.275917   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:48.311539   65592 cri.go:89] found id: ""
	I1001 20:24:48.311569   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.311579   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:48.311586   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:48.311648   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:48.342606   65592 cri.go:89] found id: ""
	I1001 20:24:48.342646   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.342658   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:48.342666   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:48.342718   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:48.375554   65592 cri.go:89] found id: ""
	I1001 20:24:48.375581   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.375591   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:48.375597   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:48.375642   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:48.407747   65592 cri.go:89] found id: ""
	I1001 20:24:48.407776   65592 logs.go:276] 0 containers: []
	W1001 20:24:48.407789   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:48.407800   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:48.407814   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:48.457470   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:48.457503   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:48.470483   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:48.470517   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:48.533536   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:48.533565   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:48.533580   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:48.614530   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:48.614571   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:51.157091   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:51.170292   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:51.170364   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:51.203784   65592 cri.go:89] found id: ""
	I1001 20:24:51.203809   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.203822   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:51.203828   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:51.203917   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:51.239789   65592 cri.go:89] found id: ""
	I1001 20:24:51.239826   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.239834   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:51.239840   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:51.239889   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:51.274562   65592 cri.go:89] found id: ""
	I1001 20:24:51.274595   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.274607   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:51.274617   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:51.274701   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:51.306172   65592 cri.go:89] found id: ""
	I1001 20:24:51.306199   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.306207   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:51.306213   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:51.306269   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:51.339631   65592 cri.go:89] found id: ""
	I1001 20:24:51.339660   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.339668   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:51.339674   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:51.339725   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:51.372128   65592 cri.go:89] found id: ""
	I1001 20:24:51.372154   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.372163   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:51.372169   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:51.372223   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:51.403790   65592 cri.go:89] found id: ""
	I1001 20:24:51.403818   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.403828   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:51.403842   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:51.403890   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:51.437771   65592 cri.go:89] found id: ""
	I1001 20:24:51.437799   65592 logs.go:276] 0 containers: []
	W1001 20:24:51.437808   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:51.437816   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:51.437827   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:51.489824   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:51.489864   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:51.503478   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:51.503508   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:51.573741   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:51.573768   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:51.573780   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:51.662355   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:51.662391   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
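
Every "describe nodes" attempt in this log fails the same way: the bundled v1.20.0 kubectl gets "connection refused" on localhost:8443, which is consistent with the empty kube-apiserver listing above, i.e. nothing is bound to the apiserver port. A plain TCP dial is enough to tell that apart from a network problem; the sketch below is a hypothetical standalone check, not part of minikube. "connection refused" means the host answered but no process is listening, while a timeout or "no route to host" (as in the interleaved 192.168.50.4:22 lines from another test in this run) points at the network or the VM itself.

	// Probes the apiserver port. "connection refused" = host up, port closed
	// (the situation in this log); a timeout or "no route to host" = network/VM problem.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on 127.0.0.1:8443")
	}
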
	I1001 20:24:49.618685   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:51.619186   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:53.012639   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:24:51.761853   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:53.762442   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:56.261818   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:54.199747   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:54.212731   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:54.212797   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:54.244554   65592 cri.go:89] found id: ""
	I1001 20:24:54.244586   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.244596   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:54.244602   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:54.244652   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:54.280636   65592 cri.go:89] found id: ""
	I1001 20:24:54.280667   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.280679   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:54.280686   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:54.280737   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:54.318213   65592 cri.go:89] found id: ""
	I1001 20:24:54.318246   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.318257   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:54.318265   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:54.318321   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:54.353563   65592 cri.go:89] found id: ""
	I1001 20:24:54.353595   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.353606   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:54.353615   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:54.353678   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:54.387770   65592 cri.go:89] found id: ""
	I1001 20:24:54.387795   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.387803   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:54.387809   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:54.387869   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:54.421289   65592 cri.go:89] found id: ""
	I1001 20:24:54.421317   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.421325   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:54.421332   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:54.421382   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:54.456221   65592 cri.go:89] found id: ""
	I1001 20:24:54.456261   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.456274   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:54.456282   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:54.456348   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:54.488174   65592 cri.go:89] found id: ""
	I1001 20:24:54.488208   65592 logs.go:276] 0 containers: []
	W1001 20:24:54.488219   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:54.488228   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:54.488241   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:54.540981   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:54.541020   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:24:54.554099   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:54.554129   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:54.623978   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:54.624013   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:54.624034   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:54.704703   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:54.704738   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:54.119129   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:56.619282   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:24:56.088698   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:24:58.262173   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:00.761865   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
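
The interleaved pod_ready lines belong to two other tests in this parallel run (processes 65263 and 64676); both are polling a metrics-server pod that never reports Ready. The condition being polled is the pod's Ready condition. A minimal client-go equivalent of a single check is sketched below; it is a standalone program, the kubeconfig path is an assumption, the pod name is copied from the log for illustration, and this is not minikube's pod_ready.go.

	// podReady-style check: fetch one pod and report whether its Ready condition is True.
	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config") // assumed path
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
			"metrics-server-6867b74b74-2rpwt", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		ready := false
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
	}
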
	I1001 20:24:57.241791   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:24:57.254771   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:24:57.254843   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:24:57.290226   65592 cri.go:89] found id: ""
	I1001 20:24:57.290263   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.290271   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:24:57.290277   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:24:57.290336   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:24:57.324910   65592 cri.go:89] found id: ""
	I1001 20:24:57.324938   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.324946   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:24:57.324951   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:24:57.325068   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:24:57.360553   65592 cri.go:89] found id: ""
	I1001 20:24:57.360586   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.360601   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:24:57.360608   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:24:57.360669   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:24:57.395182   65592 cri.go:89] found id: ""
	I1001 20:24:57.395216   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.395229   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:24:57.395236   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:24:57.395296   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:24:57.428967   65592 cri.go:89] found id: ""
	I1001 20:24:57.428998   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.429011   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:24:57.429017   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:24:57.429072   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:24:57.462483   65592 cri.go:89] found id: ""
	I1001 20:24:57.462511   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.462519   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:24:57.462525   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:24:57.462581   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:24:57.495505   65592 cri.go:89] found id: ""
	I1001 20:24:57.495538   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.495550   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:24:57.495556   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:24:57.495615   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:24:57.528132   65592 cri.go:89] found id: ""
	I1001 20:24:57.528164   65592 logs.go:276] 0 containers: []
	W1001 20:24:57.528176   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:24:57.528188   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:24:57.528203   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:24:57.596557   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:24:57.596583   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:24:57.596598   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:24:57.676797   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:24:57.676830   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:57.714624   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:24:57.714653   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:24:57.763801   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:24:57.763839   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
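
Each retry of this loop gathers the same node-side diagnostics: the kubelet and CRI-O journals, recent kernel warnings from dmesg, and a full container listing. To reproduce that collection by hand on the node, a small wrapper over the same shell commands could look like the sketch below (assumes passwordless sudo; the command strings are copied from the log above).

	// Re-runs the node-side diagnostics minikube gathers in this log.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmds := []string{
			"sudo journalctl -u kubelet -n 400",
			"sudo journalctl -u crio -n 400",
			"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"sudo crictl ps -a",
		}
		for _, c := range cmds {
			fmt.Println("#", c)
			out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
			if err != nil {
				fmt.Println("command failed:", err)
			}
			fmt.Println(string(out))
		}
	}
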
	I1001 20:25:00.277808   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:00.291432   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:00.291489   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:00.327524   65592 cri.go:89] found id: ""
	I1001 20:25:00.327554   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.327562   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:00.327568   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:00.327618   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:00.364125   65592 cri.go:89] found id: ""
	I1001 20:25:00.364153   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.364162   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:00.364167   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:00.364229   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:00.404507   65592 cri.go:89] found id: ""
	I1001 20:25:00.404543   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.404555   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:00.404564   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:00.404770   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:00.438761   65592 cri.go:89] found id: ""
	I1001 20:25:00.438792   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.438800   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:00.438807   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:00.438862   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:00.473263   65592 cri.go:89] found id: ""
	I1001 20:25:00.473301   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.473313   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:00.473321   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:00.473391   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:00.510276   65592 cri.go:89] found id: ""
	I1001 20:25:00.510307   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.510317   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:00.510324   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:00.510383   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:00.545118   65592 cri.go:89] found id: ""
	I1001 20:25:00.545149   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.545165   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:00.545173   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:00.545229   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:00.577773   65592 cri.go:89] found id: ""
	I1001 20:25:00.577799   65592 logs.go:276] 0 containers: []
	W1001 20:25:00.577810   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:00.577821   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:00.577835   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:00.628978   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:00.629012   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:00.642192   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:00.642225   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:00.711399   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:00.711432   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:00.711446   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:00.792477   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:00.792514   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:24:59.118041   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:01.119565   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:02.164636   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:05.236638   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
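
The 68418 lines are a different test's libmachine driver repeatedly failing to dial its VM's SSH port at 192.168.50.4:22 with "no route to host", so that machine never becomes reachable at all. libmachine keeps retrying the dial until its own timeout; a rough standalone equivalent of that wait is sketched below, with the interval and deadline as illustrative assumptions rather than libmachine's actual retry policy.

	// waitForSSH retries a TCP dial to the node's SSH port until it succeeds
	// or the deadline passes. Interval and timeout are illustrative assumptions.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func waitForSSH(addr string, interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("giving up on %s: %w", addr, err)
			}
			time.Sleep(interval) // roughly the ~3s cadence visible in the log above
		}
	}

	func main() {
		if err := waitForSSH("192.168.50.4:22", 3*time.Second, 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
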
	I1001 20:25:02.762323   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:04.764910   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:03.332492   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:03.347542   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:03.347622   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:03.388263   65592 cri.go:89] found id: ""
	I1001 20:25:03.388292   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.388300   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:03.388306   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:03.388353   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:03.421489   65592 cri.go:89] found id: ""
	I1001 20:25:03.421525   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.421534   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:03.421539   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:03.421634   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:03.457139   65592 cri.go:89] found id: ""
	I1001 20:25:03.457172   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.457182   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:03.457189   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:03.457251   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:03.497203   65592 cri.go:89] found id: ""
	I1001 20:25:03.497232   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.497241   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:03.497247   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:03.497313   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:03.535137   65592 cri.go:89] found id: ""
	I1001 20:25:03.535163   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.535171   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:03.535176   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:03.535221   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:03.569131   65592 cri.go:89] found id: ""
	I1001 20:25:03.569158   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.569166   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:03.569171   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:03.569217   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:03.605289   65592 cri.go:89] found id: ""
	I1001 20:25:03.605321   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.605329   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:03.605336   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:03.605389   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:03.651086   65592 cri.go:89] found id: ""
	I1001 20:25:03.651115   65592 logs.go:276] 0 containers: []
	W1001 20:25:03.651123   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:03.651134   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:03.651145   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:03.731256   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:03.731281   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:03.731299   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:03.809393   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:03.809442   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:03.849171   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:03.849198   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:03.898009   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:03.898045   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:06.411962   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:06.425432   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:06.425513   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:06.463339   65592 cri.go:89] found id: ""
	I1001 20:25:06.463371   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.463383   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:06.463391   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:06.463455   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:06.502527   65592 cri.go:89] found id: ""
	I1001 20:25:06.502561   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.502569   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:06.502611   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:06.502687   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:06.547428   65592 cri.go:89] found id: ""
	I1001 20:25:06.547465   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.547474   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:06.547480   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:06.547539   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:06.581672   65592 cri.go:89] found id: ""
	I1001 20:25:06.581699   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.581708   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:06.581713   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:06.581769   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:06.615391   65592 cri.go:89] found id: ""
	I1001 20:25:06.615436   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.615449   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:06.615457   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:06.615525   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:06.651019   65592 cri.go:89] found id: ""
	I1001 20:25:06.651050   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.651060   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:06.651067   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:06.651142   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:06.687887   65592 cri.go:89] found id: ""
	I1001 20:25:06.687912   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.687922   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:06.687929   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:06.687982   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:06.729234   65592 cri.go:89] found id: ""
	I1001 20:25:06.729263   65592 logs.go:276] 0 containers: []
	W1001 20:25:06.729273   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:06.729282   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:06.729296   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:06.747295   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:06.747326   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:06.816480   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:06.816511   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:06.816524   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:06.896918   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:06.896957   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:06.938922   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:06.938958   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:03.619205   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:06.118575   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:06.765214   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:09.261806   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:11.262162   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:09.494252   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:09.508085   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:09.508171   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:09.542999   65592 cri.go:89] found id: ""
	I1001 20:25:09.543029   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.543037   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:09.543043   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:09.543100   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:09.578112   65592 cri.go:89] found id: ""
	I1001 20:25:09.578137   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.578145   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:09.578150   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:09.578199   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:09.613123   65592 cri.go:89] found id: ""
	I1001 20:25:09.613150   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.613158   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:09.613166   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:09.613223   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:09.648172   65592 cri.go:89] found id: ""
	I1001 20:25:09.648214   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.648223   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:09.648230   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:09.648302   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:09.681217   65592 cri.go:89] found id: ""
	I1001 20:25:09.681244   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.681254   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:09.681261   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:09.681320   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:09.718166   65592 cri.go:89] found id: ""
	I1001 20:25:09.718196   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.718204   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:09.718212   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:09.718272   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:09.751910   65592 cri.go:89] found id: ""
	I1001 20:25:09.751942   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.751951   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:09.751956   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:09.752004   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:09.789213   65592 cri.go:89] found id: ""
	I1001 20:25:09.789237   65592 logs.go:276] 0 containers: []
	W1001 20:25:09.789246   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:09.789254   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:09.789265   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:09.826746   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:09.826780   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:09.879079   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:09.879123   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:09.892480   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:09.892507   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:09.967048   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:09.967084   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:09.967103   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:08.118822   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:10.120018   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:12.620582   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:14.356624   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:13.262286   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:15.263349   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:12.545057   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:12.557888   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:12.557969   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:12.594881   65592 cri.go:89] found id: ""
	I1001 20:25:12.594928   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.594942   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:12.594952   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:12.595021   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:12.631393   65592 cri.go:89] found id: ""
	I1001 20:25:12.631425   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.631437   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:12.631445   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:12.631504   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:12.666442   65592 cri.go:89] found id: ""
	I1001 20:25:12.666476   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.666486   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:12.666493   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:12.666548   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:12.703321   65592 cri.go:89] found id: ""
	I1001 20:25:12.703359   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.703371   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:12.703379   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:12.703444   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:12.742188   65592 cri.go:89] found id: ""
	I1001 20:25:12.742216   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.742224   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:12.742230   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:12.742276   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:12.781829   65592 cri.go:89] found id: ""
	I1001 20:25:12.781859   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.781869   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:12.781876   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:12.781940   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:12.815368   65592 cri.go:89] found id: ""
	I1001 20:25:12.815397   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.815405   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:12.815411   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:12.815463   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:12.850913   65592 cri.go:89] found id: ""
	I1001 20:25:12.850941   65592 logs.go:276] 0 containers: []
	W1001 20:25:12.850949   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:12.850958   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:12.850968   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:12.901409   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:12.901443   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:12.914517   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:12.914567   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:12.980086   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:12.980119   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:12.980135   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:13.055950   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:13.055989   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:15.595692   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:15.609648   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:15.609728   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:15.645477   65592 cri.go:89] found id: ""
	I1001 20:25:15.645502   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.645510   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:15.645514   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:15.645558   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:15.679674   65592 cri.go:89] found id: ""
	I1001 20:25:15.679702   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.679711   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:15.679717   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:15.679774   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:15.718057   65592 cri.go:89] found id: ""
	I1001 20:25:15.718082   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.718092   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:15.718097   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:15.718153   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:15.754094   65592 cri.go:89] found id: ""
	I1001 20:25:15.754121   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.754130   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:15.754136   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:15.754189   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:15.790415   65592 cri.go:89] found id: ""
	I1001 20:25:15.790450   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.790464   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:15.790472   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:15.790535   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:15.825603   65592 cri.go:89] found id: ""
	I1001 20:25:15.825630   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.825645   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:15.825653   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:15.825717   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:15.861330   65592 cri.go:89] found id: ""
	I1001 20:25:15.861356   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.861368   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:15.861375   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:15.861451   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:15.897534   65592 cri.go:89] found id: ""
	I1001 20:25:15.897564   65592 logs.go:276] 0 containers: []
	W1001 20:25:15.897575   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:15.897584   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:15.897598   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:15.972842   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:15.972881   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:16.010625   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:16.010653   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:16.062717   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:16.062762   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:16.076538   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:16.076568   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:16.156886   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
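
Since crictl shows no control-plane containers at all, the usual next question is whether the kubelet is running and whether the static pod manifests it would launch them from are present. A small hedged check is sketched below; the manifest directory /etc/kubernetes/manifests is the conventional kubeadm location and is an assumption about this node image, not something confirmed by the log.

	// Checks kubelet's systemd state and lists any static pod manifests.
	// Paths and the reliance on systemctl are assumptions about the node image.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		state, _ := exec.Command("systemctl", "is-active", "kubelet").CombinedOutput()
		fmt.Printf("kubelet: %s", state) // e.g. "active", "inactive", or "activating"

		entries, err := os.ReadDir("/etc/kubernetes/manifests")
		if err != nil {
			fmt.Println("cannot read static pod manifests:", err)
			return
		}
		for _, e := range entries {
			fmt.Println("manifest:", e.Name())
		}
	}
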
	I1001 20:25:15.118878   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:17.119791   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:17.428649   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:17.764089   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:20.261752   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:18.657436   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:18.673018   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:18.673093   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:18.708040   65592 cri.go:89] found id: ""
	I1001 20:25:18.708078   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.708091   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:18.708100   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:18.708167   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:18.740152   65592 cri.go:89] found id: ""
	I1001 20:25:18.740188   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.740200   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:18.740207   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:18.740264   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:18.778238   65592 cri.go:89] found id: ""
	I1001 20:25:18.778270   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.778279   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:18.778287   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:18.778351   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:18.815450   65592 cri.go:89] found id: ""
	I1001 20:25:18.815489   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.815503   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:18.815512   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:18.815576   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:18.850008   65592 cri.go:89] found id: ""
	I1001 20:25:18.850038   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.850047   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:18.850053   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:18.850104   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:18.890919   65592 cri.go:89] found id: ""
	I1001 20:25:18.890943   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.890951   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:18.890957   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:18.891004   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:18.934196   65592 cri.go:89] found id: ""
	I1001 20:25:18.934228   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.934240   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:18.934247   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:18.934307   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:18.977817   65592 cri.go:89] found id: ""
	I1001 20:25:18.977850   65592 logs.go:276] 0 containers: []
	W1001 20:25:18.977862   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:18.977875   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:18.977889   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:19.039867   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:19.039910   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:19.054277   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:19.054310   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:19.125736   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:19.125765   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:19.125782   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:19.208588   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:19.208622   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:21.750881   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:21.766638   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:21.766712   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:21.801906   65592 cri.go:89] found id: ""
	I1001 20:25:21.801930   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.801938   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:21.801944   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:21.801990   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:21.842801   65592 cri.go:89] found id: ""
	I1001 20:25:21.842830   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.842844   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:21.842852   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:21.842917   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:21.876550   65592 cri.go:89] found id: ""
	I1001 20:25:21.876577   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.876588   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:21.876594   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:21.876647   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:21.910972   65592 cri.go:89] found id: ""
	I1001 20:25:21.911007   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.911016   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:21.911022   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:21.911098   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:21.945721   65592 cri.go:89] found id: ""
	I1001 20:25:21.945753   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.945765   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:21.945773   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:21.945833   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:21.982101   65592 cri.go:89] found id: ""
	I1001 20:25:21.982131   65592 logs.go:276] 0 containers: []
	W1001 20:25:21.982143   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:21.982151   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:21.982242   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:22.016526   65592 cri.go:89] found id: ""
	I1001 20:25:22.016558   65592 logs.go:276] 0 containers: []
	W1001 20:25:22.016569   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:22.016577   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:22.016632   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:22.054792   65592 cri.go:89] found id: ""
	I1001 20:25:22.054822   65592 logs.go:276] 0 containers: []
	W1001 20:25:22.054833   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:22.054844   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:22.054863   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:22.105936   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:22.105974   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:22.120834   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:22.120858   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:22.195177   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:22.195211   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:22.195228   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:19.120304   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:21.618511   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:23.512698   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:22.264134   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:24.762355   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:22.281244   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:22.281285   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
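	Note on the block above: the repeated cri.go / logs.go lines record minikube probing the old-k8s-version (v1.20.0) node for each control-plane container by name via crictl and, finding none, falling back to gathering kubelet, dmesg, CRI-O and container-status logs; "describe nodes" keeps failing because nothing is listening on localhost:8443. The sketch below is a hypothetical local approximation of that probe (minikube actually runs it over SSH inside the VM via ssh_runner); it assumes crictl is on PATH and is not minikube's own code.

	// Hypothetical local approximation of the probe recorded by the cri.go /
	// ssh_runner.go lines above. Assumes crictl is on PATH; minikube runs the
	// same command over SSH inside the guest.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs returns the IDs of all containers (any state) whose
	// name matches the given component, e.g. "kube-apiserver".
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := listContainerIDs(c)
			if err != nil {
				fmt.Printf("probe for %q failed: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				// This is the "No container was found matching ..." branch in the log.
				fmt.Printf("no container found matching %q\n", c)
			}
		}
	}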
	I1001 20:25:24.824197   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:24.840967   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:24.841030   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:24.882399   65592 cri.go:89] found id: ""
	I1001 20:25:24.882429   65592 logs.go:276] 0 containers: []
	W1001 20:25:24.882443   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:24.882449   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:24.882497   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:24.935548   65592 cri.go:89] found id: ""
	I1001 20:25:24.935581   65592 logs.go:276] 0 containers: []
	W1001 20:25:24.935590   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:24.935596   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:24.935644   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:24.976931   65592 cri.go:89] found id: ""
	I1001 20:25:24.976958   65592 logs.go:276] 0 containers: []
	W1001 20:25:24.976969   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:24.976976   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:24.977035   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:25.009926   65592 cri.go:89] found id: ""
	I1001 20:25:25.009959   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.009968   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:25.009975   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:25.010039   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:25.043261   65592 cri.go:89] found id: ""
	I1001 20:25:25.043299   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.043310   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:25.043316   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:25.043377   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:25.075177   65592 cri.go:89] found id: ""
	I1001 20:25:25.075205   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.075214   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:25.075221   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:25.075267   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:25.109792   65592 cri.go:89] found id: ""
	I1001 20:25:25.109832   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.109845   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:25.109871   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:25.109942   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:25.148721   65592 cri.go:89] found id: ""
	I1001 20:25:25.148753   65592 logs.go:276] 0 containers: []
	W1001 20:25:25.148763   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:25.148772   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:25.148790   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:25.161802   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:25.161841   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:25.227699   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:25.227732   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:25.227750   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:25.314028   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:25.314075   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:25.354881   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:25.354919   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:23.618792   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:26.118493   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:26.580628   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:27.262584   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:29.761866   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
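	The interleaved pod_ready.go lines come from two other profiles (pids 65263 and 64676) polling their metrics-server pod for the Ready condition roughly every 2.5s until a 4m0s timeout, which pid 65263 hits at 20:25:41 further down. Below is a minimal client-go sketch of that kind of Ready check; the kubeconfig path, pod name and poll interval are taken from the log or assumed, and this is not minikube's actual implementation.

	// Minimal client-go sketch of a Ready-condition poll; an illustration only.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the pod's Ready condition is True.
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s timeout seen in the log
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-6867b74b74-2rpwt", metav1.GetOptions{})
			if err == nil && podIsReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			fmt.Println(`pod has status "Ready":"False"`)
			time.Sleep(2500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}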
	I1001 20:25:27.906936   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:27.920745   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:27.920806   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:27.955399   65592 cri.go:89] found id: ""
	I1001 20:25:27.955426   65592 logs.go:276] 0 containers: []
	W1001 20:25:27.955444   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:27.955450   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:27.955503   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:27.993714   65592 cri.go:89] found id: ""
	I1001 20:25:27.993747   65592 logs.go:276] 0 containers: []
	W1001 20:25:27.993759   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:27.993766   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:27.993827   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:28.028439   65592 cri.go:89] found id: ""
	I1001 20:25:28.028475   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.028487   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:28.028494   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:28.028563   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:28.072935   65592 cri.go:89] found id: ""
	I1001 20:25:28.072966   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.072977   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:28.072985   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:28.073050   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:28.107241   65592 cri.go:89] found id: ""
	I1001 20:25:28.107275   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.107285   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:28.107293   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:28.107357   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:28.141382   65592 cri.go:89] found id: ""
	I1001 20:25:28.141412   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.141423   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:28.141431   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:28.141494   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:28.175749   65592 cri.go:89] found id: ""
	I1001 20:25:28.175782   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.175794   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:28.175801   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:28.175864   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:28.214968   65592 cri.go:89] found id: ""
	I1001 20:25:28.214997   65592 logs.go:276] 0 containers: []
	W1001 20:25:28.215006   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:28.215015   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:28.215027   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:28.259588   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:28.259619   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:28.314439   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:28.314480   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:28.327938   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:28.327967   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:28.399479   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:28.399508   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:28.399523   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:30.978863   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:30.991415   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:30.991493   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:31.026443   65592 cri.go:89] found id: ""
	I1001 20:25:31.026480   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.026494   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:31.026513   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:31.026576   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:31.060635   65592 cri.go:89] found id: ""
	I1001 20:25:31.060663   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.060678   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:31.060684   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:31.060743   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:31.095494   65592 cri.go:89] found id: ""
	I1001 20:25:31.095525   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.095533   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:31.095540   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:31.095587   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:31.130693   65592 cri.go:89] found id: ""
	I1001 20:25:31.130718   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.130728   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:31.130741   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:31.130802   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:31.167928   65592 cri.go:89] found id: ""
	I1001 20:25:31.167960   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.167973   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:31.167980   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:31.168033   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:31.202813   65592 cri.go:89] found id: ""
	I1001 20:25:31.202843   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.202855   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:31.202864   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:31.202925   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:31.240424   65592 cri.go:89] found id: ""
	I1001 20:25:31.240459   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.240468   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:31.240474   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:31.240521   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:31.275470   65592 cri.go:89] found id: ""
	I1001 20:25:31.275502   65592 logs.go:276] 0 containers: []
	W1001 20:25:31.275510   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:31.275518   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:31.275529   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:31.329604   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:31.329642   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:31.342695   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:31.342724   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:31.410169   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:31.410275   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:31.410303   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:31.489630   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:31.489677   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:28.118608   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:30.118718   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:32.119227   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:32.660640   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:35.732653   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
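	The "Error dialing TCP ... no route to host" lines belong to yet another profile (pid 68418) whose libmachine driver cannot reach the guest's SSH endpoint at 192.168.50.4:22. A hypothetical reproduction of that dial is below; the address comes from the log, the timeout value is an assumption.

	// Hypothetical reproduction of the dial pid 68418 keeps retrying.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.50.4:22", 10*time.Second)
		if err != nil {
			// On an unreachable guest this surfaces as
			// "dial tcp 192.168.50.4:22: connect: no route to host".
			fmt.Println("Error dialing TCP:", err)
			return
		}
		defer conn.Close()
		fmt.Println("connected to", conn.RemoteAddr())
	}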
	I1001 20:25:31.762062   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:33.764597   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:36.263251   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:34.027406   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:34.039902   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:34.039975   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:34.074992   65592 cri.go:89] found id: ""
	I1001 20:25:34.075025   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.075038   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:34.075045   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:34.075106   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:34.110264   65592 cri.go:89] found id: ""
	I1001 20:25:34.110293   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.110304   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:34.110311   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:34.110371   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:34.147097   65592 cri.go:89] found id: ""
	I1001 20:25:34.147132   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.147143   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:34.147151   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:34.147208   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:34.179453   65592 cri.go:89] found id: ""
	I1001 20:25:34.179481   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.179491   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:34.179500   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:34.179554   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:34.212407   65592 cri.go:89] found id: ""
	I1001 20:25:34.212433   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.212442   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:34.212449   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:34.212495   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:34.244400   65592 cri.go:89] found id: ""
	I1001 20:25:34.244429   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.244440   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:34.244447   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:34.244510   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:34.278423   65592 cri.go:89] found id: ""
	I1001 20:25:34.278448   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.278458   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:34.278464   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:34.278520   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:34.311019   65592 cri.go:89] found id: ""
	I1001 20:25:34.311049   65592 logs.go:276] 0 containers: []
	W1001 20:25:34.311059   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:34.311072   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:34.311083   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:34.347521   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:34.347549   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:34.400717   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:34.400754   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:34.414550   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:34.414576   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:34.486478   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:34.486503   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:34.486519   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:37.071687   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:37.084941   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:37.085025   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:37.119834   65592 cri.go:89] found id: ""
	I1001 20:25:37.119862   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.119870   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:37.119875   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:37.119984   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:37.154795   65592 cri.go:89] found id: ""
	I1001 20:25:37.154832   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.154851   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:37.154867   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:37.154927   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:37.191552   65592 cri.go:89] found id: ""
	I1001 20:25:37.191581   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.191592   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:37.191599   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:37.191670   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:34.119370   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:36.119698   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:38.761540   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:40.762894   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:37.228883   65592 cri.go:89] found id: ""
	I1001 20:25:37.228918   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.228928   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:37.228936   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:37.229000   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:37.263533   65592 cri.go:89] found id: ""
	I1001 20:25:37.263558   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.263568   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:37.263577   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:37.263638   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:37.297367   65592 cri.go:89] found id: ""
	I1001 20:25:37.297401   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.297414   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:37.297422   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:37.297486   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:37.331091   65592 cri.go:89] found id: ""
	I1001 20:25:37.331121   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.331129   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:37.331135   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:37.331202   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:37.364861   65592 cri.go:89] found id: ""
	I1001 20:25:37.364889   65592 logs.go:276] 0 containers: []
	W1001 20:25:37.364897   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:37.364905   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:37.364916   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:37.417507   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:37.417545   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:37.431613   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:37.431646   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:37.497821   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:37.497846   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:37.497861   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:37.578951   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:37.578996   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:40.121350   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:40.134553   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:40.134634   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:40.169277   65592 cri.go:89] found id: ""
	I1001 20:25:40.169313   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.169325   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:40.169333   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:40.169399   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:40.204111   65592 cri.go:89] found id: ""
	I1001 20:25:40.204144   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.204153   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:40.204159   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:40.204206   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:40.237841   65592 cri.go:89] found id: ""
	I1001 20:25:40.237872   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.237880   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:40.237886   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:40.237942   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:40.273081   65592 cri.go:89] found id: ""
	I1001 20:25:40.273108   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.273117   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:40.273123   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:40.273186   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:40.307351   65592 cri.go:89] found id: ""
	I1001 20:25:40.307384   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.307394   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:40.307399   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:40.307462   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:40.340543   65592 cri.go:89] found id: ""
	I1001 20:25:40.340569   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.340578   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:40.340584   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:40.340655   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:40.376070   65592 cri.go:89] found id: ""
	I1001 20:25:40.376112   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.376123   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:40.376130   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:40.376194   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:40.410236   65592 cri.go:89] found id: ""
	I1001 20:25:40.410267   65592 logs.go:276] 0 containers: []
	W1001 20:25:40.410279   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:40.410289   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:40.410300   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:40.463799   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:40.463835   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:40.478403   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:40.478436   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:40.547250   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:40.547279   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:40.547291   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:40.630061   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:40.630098   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:38.617891   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:40.618430   65263 pod_ready.go:103] pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:41.612771   65263 pod_ready.go:82] duration metric: took 4m0.000338317s for pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace to be "Ready" ...
	E1001 20:25:41.612803   65263 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-7jc25" in "kube-system" namespace to be "Ready" (will not retry!)
	I1001 20:25:41.612832   65263 pod_ready.go:39] duration metric: took 4m13.169141642s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:25:41.612859   65263 kubeadm.go:597] duration metric: took 4m21.203039001s to restartPrimaryControlPlane
	W1001 20:25:41.612919   65263 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1001 20:25:41.612944   65263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1001 20:25:41.812689   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:44.884661   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:43.264334   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:45.762034   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:43.170764   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:43.183046   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:25:43.183124   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:25:43.222995   65592 cri.go:89] found id: ""
	I1001 20:25:43.223029   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.223038   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:25:43.223044   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:25:43.223105   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:25:43.256861   65592 cri.go:89] found id: ""
	I1001 20:25:43.256891   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.256902   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:25:43.256910   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:25:43.257002   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:25:43.292643   65592 cri.go:89] found id: ""
	I1001 20:25:43.292687   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.292698   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:25:43.292704   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:25:43.292754   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:25:43.326539   65592 cri.go:89] found id: ""
	I1001 20:25:43.326568   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.326576   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:25:43.326582   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:25:43.326628   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:25:43.359787   65592 cri.go:89] found id: ""
	I1001 20:25:43.359813   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.359822   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:25:43.359828   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:25:43.359890   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:25:43.392045   65592 cri.go:89] found id: ""
	I1001 20:25:43.392076   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.392086   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:25:43.392092   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:25:43.392145   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:25:43.429498   65592 cri.go:89] found id: ""
	I1001 20:25:43.429529   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.429538   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:25:43.429544   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:25:43.429591   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:25:43.462728   65592 cri.go:89] found id: ""
	I1001 20:25:43.462760   65592 logs.go:276] 0 containers: []
	W1001 20:25:43.462771   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:25:43.462781   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:25:43.462798   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:25:43.512683   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:25:43.512717   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:25:43.527253   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:25:43.527285   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:25:43.598963   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:25:43.598989   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:25:43.599003   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:25:43.679743   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:25:43.679790   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:25:46.217101   65592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:25:46.230349   65592 kubeadm.go:597] duration metric: took 4m1.895228035s to restartPrimaryControlPlane
	W1001 20:25:46.230421   65592 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1001 20:25:46.230450   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1001 20:25:47.762241   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:49.763115   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:47.271291   65592 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.040818559s)
	I1001 20:25:47.271362   65592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:25:47.285083   65592 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 20:25:47.295774   65592 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:25:47.305487   65592 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:25:47.305511   65592 kubeadm.go:157] found existing configuration files:
	
	I1001 20:25:47.305568   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 20:25:47.314488   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:25:47.314573   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:25:47.323852   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 20:25:47.332496   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:25:47.332553   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:25:47.341236   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 20:25:47.349932   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:25:47.350002   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:25:47.359345   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 20:25:47.369180   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:25:47.369233   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 20:25:47.378232   65592 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 20:25:47.595501   65592 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
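	At this point pid 65592 has exhausted its restart window (4m1.9s above): it resets the cluster with kubeadm reset, finds none of the /etc/kubernetes/*.conf files present, removes any that do not reference https://control-plane.minikube.internal:8443, and re-runs kubeadm init with a long --ignore-preflight-errors list. The sketch below approximates that stale-kubeconfig check-and-remove step locally; it is an illustration under those assumptions, not minikube's source (which runs the equivalent grep/rm over SSH, as logged at kubeadm.go:163).

	// Rough local approximation of the stale-kubeconfig cleanup recorded above:
	// keep a config file only if it already points at the expected control-plane
	// endpoint, otherwise remove it before re-running kubeadm init.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const wantServer = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), wantServer) {
				// Mirrors: `"https://..." may not be in <file> - will remove`
				// followed by `sudo rm -f <file>` in the log.
				fmt.Printf("%s: stale or missing, removing\n", f)
				_ = os.Remove(f)
				continue
			}
			fmt.Printf("%s: already points at %s, keeping\n", f, wantServer)
		}
	}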
	I1001 20:25:50.964640   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:54.036635   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:52.261890   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:54.761886   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:00.116640   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:25:57.261837   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:25:59.262445   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:01.262529   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:03.188675   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:03.762361   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:06.261749   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:07.708438   65263 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.095470945s)
	I1001 20:26:07.708514   65263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:26:07.722982   65263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 20:26:07.732118   65263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:26:07.741172   65263 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:26:07.741198   65263 kubeadm.go:157] found existing configuration files:
	
	I1001 20:26:07.741244   65263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 20:26:07.749683   65263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:26:07.749744   65263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:26:07.758875   65263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 20:26:07.767668   65263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:26:07.767739   65263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:26:07.776648   65263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 20:26:07.785930   65263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:26:07.785982   65263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:26:07.794739   65263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 20:26:07.803180   65263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:26:07.803241   65263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 20:26:07.812178   65263 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 20:26:07.851817   65263 kubeadm.go:310] W1001 20:26:07.836874    2546 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 20:26:07.852402   65263 kubeadm.go:310] W1001 20:26:07.837670    2546 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 20:26:09.272541   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:08.761247   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:10.761797   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:07.957551   65263 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 20:26:12.344653   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:16.385918   65263 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 20:26:16.385979   65263 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:26:16.386062   65263 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:26:16.386172   65263 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:26:16.386297   65263 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 20:26:16.386400   65263 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 20:26:16.387827   65263 out.go:235]   - Generating certificates and keys ...
	I1001 20:26:16.387909   65263 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:26:16.387989   65263 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:26:16.388104   65263 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 20:26:16.388191   65263 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 20:26:16.388284   65263 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 20:26:16.388370   65263 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 20:26:16.388464   65263 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 20:26:16.388545   65263 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 20:26:16.388646   65263 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 20:26:16.388775   65263 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 20:26:16.388824   65263 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 20:26:16.388908   65263 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:26:16.388956   65263 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:26:16.389006   65263 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 20:26:16.389048   65263 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:26:16.389117   65263 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:26:16.389201   65263 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:26:16.389333   65263 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:26:16.389444   65263 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:26:16.390823   65263 out.go:235]   - Booting up control plane ...
	I1001 20:26:16.390917   65263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:26:16.390992   65263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:26:16.391061   65263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:26:16.391161   65263 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:26:16.391285   65263 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:26:16.391335   65263 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:26:16.391468   65263 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 20:26:16.391572   65263 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 20:26:16.391628   65263 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.349149ms
	I1001 20:26:16.391686   65263 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1001 20:26:16.391736   65263 kubeadm.go:310] [api-check] The API server is healthy after 5.002046172s
	I1001 20:26:16.391818   65263 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 20:26:16.391923   65263 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 20:26:16.391999   65263 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 20:26:16.392169   65263 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-106982 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 20:26:16.392225   65263 kubeadm.go:310] [bootstrap-token] Using token: xlxn2k.owwnzt3amr4nx0st
	I1001 20:26:16.393437   65263 out.go:235]   - Configuring RBAC rules ...
	I1001 20:26:16.393539   65263 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 20:26:16.393609   65263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 20:26:16.393722   65263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 20:26:16.393834   65263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 20:26:16.393940   65263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 20:26:16.394017   65263 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 20:26:16.394117   65263 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 20:26:16.394154   65263 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 20:26:16.394195   65263 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 20:26:16.394200   65263 kubeadm.go:310] 
	I1001 20:26:16.394259   65263 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 20:26:16.394269   65263 kubeadm.go:310] 
	I1001 20:26:16.394335   65263 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 20:26:16.394341   65263 kubeadm.go:310] 
	I1001 20:26:16.394363   65263 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 20:26:16.394440   65263 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 20:26:16.394496   65263 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 20:26:16.394502   65263 kubeadm.go:310] 
	I1001 20:26:16.394553   65263 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 20:26:16.394559   65263 kubeadm.go:310] 
	I1001 20:26:16.394601   65263 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 20:26:16.394611   65263 kubeadm.go:310] 
	I1001 20:26:16.394656   65263 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 20:26:16.394720   65263 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 20:26:16.394804   65263 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 20:26:16.394814   65263 kubeadm.go:310] 
	I1001 20:26:16.394901   65263 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 20:26:16.394996   65263 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 20:26:16.395010   65263 kubeadm.go:310] 
	I1001 20:26:16.395128   65263 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xlxn2k.owwnzt3amr4nx0st \
	I1001 20:26:16.395262   65263 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 \
	I1001 20:26:16.395299   65263 kubeadm.go:310] 	--control-plane 
	I1001 20:26:16.395308   65263 kubeadm.go:310] 
	I1001 20:26:16.395426   65263 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 20:26:16.395436   65263 kubeadm.go:310] 
	I1001 20:26:16.395548   65263 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xlxn2k.owwnzt3amr4nx0st \
	I1001 20:26:16.395648   65263 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 
	I1001 20:26:16.395658   65263 cni.go:84] Creating CNI manager for ""
	I1001 20:26:16.395665   65263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:26:16.396852   65263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 20:26:12.763435   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:15.262381   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:16.398081   65263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 20:26:16.407920   65263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1001 20:26:16.428213   65263 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 20:26:16.428312   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:16.428344   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-106982 minikube.k8s.io/updated_at=2024_10_01T20_26_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=embed-certs-106982 minikube.k8s.io/primary=true
	I1001 20:26:16.667876   65263 ops.go:34] apiserver oom_adj: -16
	I1001 20:26:16.667891   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:17.168194   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:17.668772   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:18.168815   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:18.668087   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:19.168767   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:19.668624   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:20.167974   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:20.668002   65263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:26:20.758486   65263 kubeadm.go:1113] duration metric: took 4.330238814s to wait for elevateKubeSystemPrivileges
	I1001 20:26:20.758520   65263 kubeadm.go:394] duration metric: took 5m0.403602376s to StartCluster
	I1001 20:26:20.758539   65263 settings.go:142] acquiring lock: {Name:mkeb3fe64ed992373f048aef24eb0d675bcab60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:26:20.758613   65263 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:26:20.760430   65263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/kubeconfig: {Name:mk7ef19d546928001abf2478a3abe1d17765a591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:26:20.760678   65263 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 20:26:20.760746   65263 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 20:26:20.760852   65263 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-106982"
	I1001 20:26:20.760881   65263 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-106982"
	I1001 20:26:20.760877   65263 addons.go:69] Setting default-storageclass=true in profile "embed-certs-106982"
	W1001 20:26:20.760893   65263 addons.go:243] addon storage-provisioner should already be in state true
	I1001 20:26:20.760891   65263 addons.go:69] Setting metrics-server=true in profile "embed-certs-106982"
	I1001 20:26:20.760926   65263 host.go:66] Checking if "embed-certs-106982" exists ...
	I1001 20:26:20.760926   65263 addons.go:234] Setting addon metrics-server=true in "embed-certs-106982"
	W1001 20:26:20.761009   65263 addons.go:243] addon metrics-server should already be in state true
	I1001 20:26:20.761041   65263 host.go:66] Checking if "embed-certs-106982" exists ...
	I1001 20:26:20.760906   65263 config.go:182] Loaded profile config "embed-certs-106982": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:26:20.760902   65263 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-106982"
	I1001 20:26:20.761374   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.761426   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.761429   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.761468   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.761545   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.761591   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.762861   65263 out.go:177] * Verifying Kubernetes components...
	I1001 20:26:20.764393   65263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:26:20.778448   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38619
	I1001 20:26:20.779031   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.779198   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39861
	I1001 20:26:20.779632   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.779657   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.779822   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.780085   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.780331   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.780352   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.780789   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.780829   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.781030   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.781240   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetState
	I1001 20:26:20.781260   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37397
	I1001 20:26:20.781672   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.782168   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.782189   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.782587   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.783037   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.783073   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.784573   65263 addons.go:234] Setting addon default-storageclass=true in "embed-certs-106982"
	W1001 20:26:20.784589   65263 addons.go:243] addon default-storageclass should already be in state true
	I1001 20:26:20.784609   65263 host.go:66] Checking if "embed-certs-106982" exists ...
	I1001 20:26:20.784877   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.784912   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.797787   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37113
	I1001 20:26:20.797864   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33201
	I1001 20:26:20.798261   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.798311   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.798836   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.798855   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.798931   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.798951   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.799226   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35413
	I1001 20:26:20.799230   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.799367   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.799409   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetState
	I1001 20:26:20.799515   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetState
	I1001 20:26:20.799695   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.800114   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.800130   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.800602   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.801316   65263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:26:20.801331   65263 main.go:141] libmachine: (embed-certs-106982) Calling .DriverName
	I1001 20:26:20.801351   65263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:26:20.801391   65263 main.go:141] libmachine: (embed-certs-106982) Calling .DriverName
	I1001 20:26:20.803237   65263 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1001 20:26:20.803241   65263 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 20:26:18.420597   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:17.762869   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:20.262479   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:20.804378   65263 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1001 20:26:20.804394   65263 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1001 20:26:20.804411   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHHostname
	I1001 20:26:20.804571   65263 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:26:20.804586   65263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 20:26:20.804603   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHHostname
	I1001 20:26:20.808458   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.808866   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.808906   65263 main.go:141] libmachine: (embed-certs-106982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:ab:67", ip: ""} in network mk-embed-certs-106982: {Iface:virbr1 ExpiryTime:2024-10-01 21:21:05 +0000 UTC Type:0 Mac:52:54:00:7f:ab:67 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:embed-certs-106982 Clientid:01:52:54:00:7f:ab:67}
	I1001 20:26:20.808923   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined IP address 192.168.39.203 and MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.809183   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHPort
	I1001 20:26:20.809326   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHKeyPath
	I1001 20:26:20.809462   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHUsername
	I1001 20:26:20.809582   65263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/embed-certs-106982/id_rsa Username:docker}
	I1001 20:26:20.809917   65263 main.go:141] libmachine: (embed-certs-106982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:ab:67", ip: ""} in network mk-embed-certs-106982: {Iface:virbr1 ExpiryTime:2024-10-01 21:21:05 +0000 UTC Type:0 Mac:52:54:00:7f:ab:67 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:embed-certs-106982 Clientid:01:52:54:00:7f:ab:67}
	I1001 20:26:20.809941   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined IP address 192.168.39.203 and MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.809975   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHPort
	I1001 20:26:20.810172   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHKeyPath
	I1001 20:26:20.810320   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHUsername
	I1001 20:26:20.810498   65263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/embed-certs-106982/id_rsa Username:docker}
	I1001 20:26:20.818676   65263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33699
	I1001 20:26:20.819066   65263 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:26:20.819574   65263 main.go:141] libmachine: Using API Version  1
	I1001 20:26:20.819596   65263 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:26:20.819900   65263 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:26:20.820110   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetState
	I1001 20:26:20.821633   65263 main.go:141] libmachine: (embed-certs-106982) Calling .DriverName
	I1001 20:26:20.821820   65263 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 20:26:20.821834   65263 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 20:26:20.821852   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHHostname
	I1001 20:26:20.824684   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.825165   65263 main.go:141] libmachine: (embed-certs-106982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:ab:67", ip: ""} in network mk-embed-certs-106982: {Iface:virbr1 ExpiryTime:2024-10-01 21:21:05 +0000 UTC Type:0 Mac:52:54:00:7f:ab:67 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:embed-certs-106982 Clientid:01:52:54:00:7f:ab:67}
	I1001 20:26:20.825205   65263 main.go:141] libmachine: (embed-certs-106982) DBG | domain embed-certs-106982 has defined IP address 192.168.39.203 and MAC address 52:54:00:7f:ab:67 in network mk-embed-certs-106982
	I1001 20:26:20.825425   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHPort
	I1001 20:26:20.825577   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHKeyPath
	I1001 20:26:20.825697   65263 main.go:141] libmachine: (embed-certs-106982) Calling .GetSSHUsername
	I1001 20:26:20.825835   65263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/embed-certs-106982/id_rsa Username:docker}
	I1001 20:26:20.984756   65263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:26:21.014051   65263 node_ready.go:35] waiting up to 6m0s for node "embed-certs-106982" to be "Ready" ...
	I1001 20:26:21.023227   65263 node_ready.go:49] node "embed-certs-106982" has status "Ready":"True"
	I1001 20:26:21.023274   65263 node_ready.go:38] duration metric: took 9.170523ms for node "embed-certs-106982" to be "Ready" ...
	I1001 20:26:21.023286   65263 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:26:21.029371   65263 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rq5ms" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:21.113480   65263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1001 20:26:21.113509   65263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1001 20:26:21.138000   65263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1001 20:26:21.138028   65263 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1001 20:26:21.162057   65263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:26:21.240772   65263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 20:26:21.251310   65263 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 20:26:21.251337   65263 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1001 20:26:21.316994   65263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 20:26:22.282775   65263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.041963655s)
	I1001 20:26:22.282809   65263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.120713974s)
	I1001 20:26:22.282835   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.282849   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.282849   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.282864   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.283226   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.283243   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.283256   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.283265   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.283244   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.283298   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.283311   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.283275   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.283278   65263 main.go:141] libmachine: (embed-certs-106982) DBG | Closing plugin on server side
	I1001 20:26:22.283808   65263 main.go:141] libmachine: (embed-certs-106982) DBG | Closing plugin on server side
	I1001 20:26:22.283808   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.283839   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.283892   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.283907   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.342382   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.342407   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.342708   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.342732   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.434882   65263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.117844425s)
	I1001 20:26:22.434937   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.434950   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.435276   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.435291   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.435301   65263 main.go:141] libmachine: Making call to close driver server
	I1001 20:26:22.435309   65263 main.go:141] libmachine: (embed-certs-106982) Calling .Close
	I1001 20:26:22.435554   65263 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:26:22.435582   65263 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:26:22.435593   65263 addons.go:475] Verifying addon metrics-server=true in "embed-certs-106982"
	I1001 20:26:22.437796   65263 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1001 20:26:22.438856   65263 addons.go:510] duration metric: took 1.678119807s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1001 20:26:21.492616   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:22.263077   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:24.761931   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:23.036676   65263 pod_ready.go:103] pod "coredns-7c65d6cfc9-rq5ms" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:25.537836   65263 pod_ready.go:103] pod "coredns-7c65d6cfc9-rq5ms" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:26.536827   65263 pod_ready.go:93] pod "coredns-7c65d6cfc9-rq5ms" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:26.536853   65263 pod_ready.go:82] duration metric: took 5.507455172s for pod "coredns-7c65d6cfc9-rq5ms" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:26.536865   65263 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wfdwp" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:26.541397   65263 pod_ready.go:93] pod "coredns-7c65d6cfc9-wfdwp" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:26.541427   65263 pod_ready.go:82] duration metric: took 4.554335ms for pod "coredns-7c65d6cfc9-wfdwp" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:26.541436   65263 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.048586   65263 pod_ready.go:93] pod "etcd-embed-certs-106982" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:27.048612   65263 pod_ready.go:82] duration metric: took 507.170207ms for pod "etcd-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.048622   65263 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.053967   65263 pod_ready.go:93] pod "kube-apiserver-embed-certs-106982" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:27.053994   65263 pod_ready.go:82] duration metric: took 5.365871ms for pod "kube-apiserver-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.054007   65263 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.059419   65263 pod_ready.go:93] pod "kube-controller-manager-embed-certs-106982" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:27.059441   65263 pod_ready.go:82] duration metric: took 5.427863ms for pod "kube-controller-manager-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.059452   65263 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fjnvc" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.333488   65263 pod_ready.go:93] pod "kube-proxy-fjnvc" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:27.333512   65263 pod_ready.go:82] duration metric: took 274.054021ms for pod "kube-proxy-fjnvc" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.333521   65263 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.733368   65263 pod_ready.go:93] pod "kube-scheduler-embed-certs-106982" in "kube-system" namespace has status "Ready":"True"
	I1001 20:26:27.733392   65263 pod_ready.go:82] duration metric: took 399.861423ms for pod "kube-scheduler-embed-certs-106982" in "kube-system" namespace to be "Ready" ...
	I1001 20:26:27.733400   65263 pod_ready.go:39] duration metric: took 6.710101442s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:26:27.733422   65263 api_server.go:52] waiting for apiserver process to appear ...
	I1001 20:26:27.733476   65263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:26:27.750336   65263 api_server.go:72] duration metric: took 6.989620923s to wait for apiserver process to appear ...
	I1001 20:26:27.750367   65263 api_server.go:88] waiting for apiserver healthz status ...
	I1001 20:26:27.750389   65263 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I1001 20:26:27.755350   65263 api_server.go:279] https://192.168.39.203:8443/healthz returned 200:
	ok
	I1001 20:26:27.756547   65263 api_server.go:141] control plane version: v1.31.1
	I1001 20:26:27.756572   65263 api_server.go:131] duration metric: took 6.196295ms to wait for apiserver health ...
	I1001 20:26:27.756583   65263 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 20:26:27.937329   65263 system_pods.go:59] 9 kube-system pods found
	I1001 20:26:27.937364   65263 system_pods.go:61] "coredns-7c65d6cfc9-rq5ms" [652fcc3d-ae12-4e11-b212-8891c1c05701] Running
	I1001 20:26:27.937373   65263 system_pods.go:61] "coredns-7c65d6cfc9-wfdwp" [1174cd48-6855-4813-9ecd-3b3a82386720] Running
	I1001 20:26:27.937380   65263 system_pods.go:61] "etcd-embed-certs-106982" [84d678ad-7322-48d0-8bab-6c683d3cf8a5] Running
	I1001 20:26:27.937386   65263 system_pods.go:61] "kube-apiserver-embed-certs-106982" [93d7fba8-306f-4b04-b65b-e3d4442f9ba6] Running
	I1001 20:26:27.937392   65263 system_pods.go:61] "kube-controller-manager-embed-certs-106982" [5e405af0-a942-4040-a955-8a007c2fc6e9] Running
	I1001 20:26:27.937396   65263 system_pods.go:61] "kube-proxy-fjnvc" [728b1b90-5961-45e9-9818-8fc6f6db1634] Running
	I1001 20:26:27.937402   65263 system_pods.go:61] "kube-scheduler-embed-certs-106982" [c0289891-9235-44de-a3cb-669648f5c18e] Running
	I1001 20:26:27.937416   65263 system_pods.go:61] "metrics-server-6867b74b74-z27sl" [dd1b4cdc-5ffb-4214-96c9-569ef4f7ba09] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:26:27.937427   65263 system_pods.go:61] "storage-provisioner" [3aaab1f2-8361-46c6-88be-ed9004628715] Running
	I1001 20:26:27.937441   65263 system_pods.go:74] duration metric: took 180.849735ms to wait for pod list to return data ...
	I1001 20:26:27.937453   65263 default_sa.go:34] waiting for default service account to be created ...
	I1001 20:26:28.133918   65263 default_sa.go:45] found service account: "default"
	I1001 20:26:28.133945   65263 default_sa.go:55] duration metric: took 196.482206ms for default service account to be created ...
	I1001 20:26:28.133955   65263 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 20:26:28.335883   65263 system_pods.go:86] 9 kube-system pods found
	I1001 20:26:28.335916   65263 system_pods.go:89] "coredns-7c65d6cfc9-rq5ms" [652fcc3d-ae12-4e11-b212-8891c1c05701] Running
	I1001 20:26:28.335923   65263 system_pods.go:89] "coredns-7c65d6cfc9-wfdwp" [1174cd48-6855-4813-9ecd-3b3a82386720] Running
	I1001 20:26:28.335927   65263 system_pods.go:89] "etcd-embed-certs-106982" [84d678ad-7322-48d0-8bab-6c683d3cf8a5] Running
	I1001 20:26:28.335931   65263 system_pods.go:89] "kube-apiserver-embed-certs-106982" [93d7fba8-306f-4b04-b65b-e3d4442f9ba6] Running
	I1001 20:26:28.335935   65263 system_pods.go:89] "kube-controller-manager-embed-certs-106982" [5e405af0-a942-4040-a955-8a007c2fc6e9] Running
	I1001 20:26:28.335939   65263 system_pods.go:89] "kube-proxy-fjnvc" [728b1b90-5961-45e9-9818-8fc6f6db1634] Running
	I1001 20:26:28.335942   65263 system_pods.go:89] "kube-scheduler-embed-certs-106982" [c0289891-9235-44de-a3cb-669648f5c18e] Running
	I1001 20:26:28.335947   65263 system_pods.go:89] "metrics-server-6867b74b74-z27sl" [dd1b4cdc-5ffb-4214-96c9-569ef4f7ba09] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:26:28.335951   65263 system_pods.go:89] "storage-provisioner" [3aaab1f2-8361-46c6-88be-ed9004628715] Running
	I1001 20:26:28.335959   65263 system_pods.go:126] duration metric: took 202.000148ms to wait for k8s-apps to be running ...
	I1001 20:26:28.335967   65263 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 20:26:28.336013   65263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:26:28.350578   65263 system_svc.go:56] duration metric: took 14.603568ms WaitForService to wait for kubelet
	I1001 20:26:28.350608   65263 kubeadm.go:582] duration metric: took 7.589898283s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:26:28.350630   65263 node_conditions.go:102] verifying NodePressure condition ...
	I1001 20:26:28.533508   65263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 20:26:28.533533   65263 node_conditions.go:123] node cpu capacity is 2
	I1001 20:26:28.533544   65263 node_conditions.go:105] duration metric: took 182.908473ms to run NodePressure ...
	I1001 20:26:28.533554   65263 start.go:241] waiting for startup goroutines ...
	I1001 20:26:28.533561   65263 start.go:246] waiting for cluster config update ...
	I1001 20:26:28.533571   65263 start.go:255] writing updated cluster config ...
	I1001 20:26:28.533862   65263 ssh_runner.go:195] Run: rm -f paused
	I1001 20:26:28.580991   65263 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 20:26:28.583612   65263 out.go:177] * Done! kubectl is now configured to use "embed-certs-106982" cluster and "default" namespace by default
	I1001 20:26:27.572585   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:30.648588   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:27.262297   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:29.761795   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:31.762340   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:34.261713   64676 pod_ready.go:103] pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace has status "Ready":"False"
	I1001 20:26:35.263742   64676 pod_ready.go:82] duration metric: took 4m0.008218565s for pod "metrics-server-6867b74b74-2rpwt" in "kube-system" namespace to be "Ready" ...
	E1001 20:26:35.263766   64676 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1001 20:26:35.263774   64676 pod_ready.go:39] duration metric: took 4m6.044360969s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:26:35.263791   64676 api_server.go:52] waiting for apiserver process to appear ...
	I1001 20:26:35.263820   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:26:35.263879   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:26:35.314427   64676 cri.go:89] found id: "a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:35.314450   64676 cri.go:89] found id: ""
	I1001 20:26:35.314457   64676 logs.go:276] 1 containers: [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd]
	I1001 20:26:35.314510   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.319554   64676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:26:35.319627   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:26:35.352986   64676 cri.go:89] found id: "586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:35.353006   64676 cri.go:89] found id: ""
	I1001 20:26:35.353013   64676 logs.go:276] 1 containers: [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f]
	I1001 20:26:35.353061   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.356979   64676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:26:35.357044   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:26:35.397175   64676 cri.go:89] found id: "4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:35.397196   64676 cri.go:89] found id: ""
	I1001 20:26:35.397203   64676 logs.go:276] 1 containers: [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6]
	I1001 20:26:35.397250   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.401025   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:26:35.401108   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:26:35.434312   64676 cri.go:89] found id: "89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:35.434333   64676 cri.go:89] found id: ""
	I1001 20:26:35.434340   64676 logs.go:276] 1 containers: [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d]
	I1001 20:26:35.434400   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.438325   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:26:35.438385   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:26:35.480711   64676 cri.go:89] found id: "fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:35.480738   64676 cri.go:89] found id: ""
	I1001 20:26:35.480750   64676 logs.go:276] 1 containers: [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8]
	I1001 20:26:35.480795   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.484996   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:26:35.485073   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:26:35.524876   64676 cri.go:89] found id: "69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:35.524909   64676 cri.go:89] found id: ""
	I1001 20:26:35.524920   64676 logs.go:276] 1 containers: [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf]
	I1001 20:26:35.524984   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.529297   64676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:26:35.529366   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:26:35.564110   64676 cri.go:89] found id: ""
	I1001 20:26:35.564138   64676 logs.go:276] 0 containers: []
	W1001 20:26:35.564149   64676 logs.go:278] No container was found matching "kindnet"
	I1001 20:26:35.564157   64676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1001 20:26:35.564222   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1001 20:26:35.599279   64676 cri.go:89] found id: "a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
	I1001 20:26:35.599311   64676 cri.go:89] found id: "652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:35.599318   64676 cri.go:89] found id: ""
	I1001 20:26:35.599327   64676 logs.go:276] 2 containers: [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b]
	I1001 20:26:35.599379   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.603377   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:35.607668   64676 logs.go:123] Gathering logs for kubelet ...
	I1001 20:26:35.607698   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:26:35.678017   64676 logs.go:123] Gathering logs for coredns [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6] ...
	I1001 20:26:35.678053   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:35.717814   64676 logs.go:123] Gathering logs for kube-proxy [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8] ...
	I1001 20:26:35.717842   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:35.752647   64676 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:26:35.752680   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:26:36.259582   64676 logs.go:123] Gathering logs for container status ...
	I1001 20:26:36.259630   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:26:36.299857   64676 logs.go:123] Gathering logs for storage-provisioner [652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b] ...
	I1001 20:26:36.299892   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:36.339923   64676 logs.go:123] Gathering logs for dmesg ...
	I1001 20:26:36.339973   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:26:36.353728   64676 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:26:36.353763   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 20:26:36.728608   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:39.796591   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:36.482029   64676 logs.go:123] Gathering logs for kube-apiserver [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd] ...
	I1001 20:26:36.482071   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:36.525705   64676 logs.go:123] Gathering logs for etcd [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f] ...
	I1001 20:26:36.525741   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:36.566494   64676 logs.go:123] Gathering logs for kube-scheduler [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d] ...
	I1001 20:26:36.566529   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:36.602489   64676 logs.go:123] Gathering logs for kube-controller-manager [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf] ...
	I1001 20:26:36.602523   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:36.666726   64676 logs.go:123] Gathering logs for storage-provisioner [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa] ...
	I1001 20:26:36.666757   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
	I1001 20:26:39.203217   64676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:26:39.220220   64676 api_server.go:72] duration metric: took 4m17.274155342s to wait for apiserver process to appear ...
	I1001 20:26:39.220253   64676 api_server.go:88] waiting for apiserver healthz status ...
	I1001 20:26:39.220301   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:26:39.220372   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:26:39.261710   64676 cri.go:89] found id: "a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:39.261739   64676 cri.go:89] found id: ""
	I1001 20:26:39.261749   64676 logs.go:276] 1 containers: [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd]
	I1001 20:26:39.261804   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.265994   64676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:26:39.266057   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:26:39.298615   64676 cri.go:89] found id: "586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:39.298642   64676 cri.go:89] found id: ""
	I1001 20:26:39.298650   64676 logs.go:276] 1 containers: [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f]
	I1001 20:26:39.298694   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.302584   64676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:26:39.302647   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:26:39.338062   64676 cri.go:89] found id: "4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:39.338091   64676 cri.go:89] found id: ""
	I1001 20:26:39.338102   64676 logs.go:276] 1 containers: [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6]
	I1001 20:26:39.338157   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.342553   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:26:39.342613   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:26:39.379787   64676 cri.go:89] found id: "89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:39.379818   64676 cri.go:89] found id: ""
	I1001 20:26:39.379828   64676 logs.go:276] 1 containers: [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d]
	I1001 20:26:39.379885   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.384397   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:26:39.384454   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:26:39.419175   64676 cri.go:89] found id: "fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:39.419204   64676 cri.go:89] found id: ""
	I1001 20:26:39.419215   64676 logs.go:276] 1 containers: [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8]
	I1001 20:26:39.419275   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.423113   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:26:39.423184   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:26:39.455948   64676 cri.go:89] found id: "69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:39.455974   64676 cri.go:89] found id: ""
	I1001 20:26:39.455984   64676 logs.go:276] 1 containers: [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf]
	I1001 20:26:39.456040   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.459912   64676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:26:39.459978   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:26:39.504152   64676 cri.go:89] found id: ""
	I1001 20:26:39.504179   64676 logs.go:276] 0 containers: []
	W1001 20:26:39.504187   64676 logs.go:278] No container was found matching "kindnet"
	I1001 20:26:39.504192   64676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1001 20:26:39.504241   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1001 20:26:39.538918   64676 cri.go:89] found id: "a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
	I1001 20:26:39.538940   64676 cri.go:89] found id: "652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:39.538947   64676 cri.go:89] found id: ""
	I1001 20:26:39.538957   64676 logs.go:276] 2 containers: [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b]
	I1001 20:26:39.539013   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.542832   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:39.546365   64676 logs.go:123] Gathering logs for storage-provisioner [652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b] ...
	I1001 20:26:39.546395   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:39.589286   64676 logs.go:123] Gathering logs for kubelet ...
	I1001 20:26:39.589320   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:26:39.657412   64676 logs.go:123] Gathering logs for dmesg ...
	I1001 20:26:39.657447   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:26:39.671553   64676 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:26:39.671581   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 20:26:39.786194   64676 logs.go:123] Gathering logs for kube-apiserver [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd] ...
	I1001 20:26:39.786226   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:39.829798   64676 logs.go:123] Gathering logs for coredns [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6] ...
	I1001 20:26:39.829831   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:39.865854   64676 logs.go:123] Gathering logs for kube-controller-manager [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf] ...
	I1001 20:26:39.865890   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:39.920702   64676 logs.go:123] Gathering logs for storage-provisioner [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa] ...
	I1001 20:26:39.920735   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
	I1001 20:26:39.959343   64676 logs.go:123] Gathering logs for etcd [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f] ...
	I1001 20:26:39.959375   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:40.001320   64676 logs.go:123] Gathering logs for kube-scheduler [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d] ...
	I1001 20:26:40.001354   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:40.037182   64676 logs.go:123] Gathering logs for kube-proxy [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8] ...
	I1001 20:26:40.037214   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:40.070072   64676 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:26:40.070098   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:26:40.492733   64676 logs.go:123] Gathering logs for container status ...
	I1001 20:26:40.492770   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:26:43.042801   64676 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I1001 20:26:43.048223   64676 api_server.go:279] https://192.168.61.93:8443/healthz returned 200:
	ok
	I1001 20:26:43.049199   64676 api_server.go:141] control plane version: v1.31.1
	I1001 20:26:43.049229   64676 api_server.go:131] duration metric: took 3.828968104s to wait for apiserver health ...
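Note: the health probe above is a plain HTTPS GET against the apiserver's /healthz endpoint. The same check can be reproduced by hand, for example with curl; the -k flag below (skipping certificate verification) is only an illustrative shortcut, since minikube itself authenticates with the cluster's client certificates:

	# manual re-check of the endpoint reported above (illustrative, not part of the captured run)
	curl -sk https://192.168.61.93:8443/healthz    # a healthy apiserver answers 200 with body "ok"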
	I1001 20:26:43.049239   64676 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 20:26:43.049267   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:26:43.049331   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:26:43.087098   64676 cri.go:89] found id: "a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:43.087132   64676 cri.go:89] found id: ""
	I1001 20:26:43.087144   64676 logs.go:276] 1 containers: [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd]
	I1001 20:26:43.087206   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.091606   64676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:26:43.091665   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:26:43.127154   64676 cri.go:89] found id: "586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:43.127177   64676 cri.go:89] found id: ""
	I1001 20:26:43.127184   64676 logs.go:276] 1 containers: [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f]
	I1001 20:26:43.127227   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.131246   64676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:26:43.131320   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:26:43.165473   64676 cri.go:89] found id: "4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:43.165503   64676 cri.go:89] found id: ""
	I1001 20:26:43.165514   64676 logs.go:276] 1 containers: [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6]
	I1001 20:26:43.165577   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.169908   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:26:43.169982   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:26:43.210196   64676 cri.go:89] found id: "89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:43.210225   64676 cri.go:89] found id: ""
	I1001 20:26:43.210235   64676 logs.go:276] 1 containers: [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d]
	I1001 20:26:43.210302   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.214253   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:26:43.214317   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:26:43.249533   64676 cri.go:89] found id: "fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:43.249555   64676 cri.go:89] found id: ""
	I1001 20:26:43.249563   64676 logs.go:276] 1 containers: [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8]
	I1001 20:26:43.249625   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.253555   64676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:26:43.253633   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:26:43.294711   64676 cri.go:89] found id: "69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:43.294734   64676 cri.go:89] found id: ""
	I1001 20:26:43.294742   64676 logs.go:276] 1 containers: [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf]
	I1001 20:26:43.294787   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.298960   64676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:26:43.299037   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:26:43.339542   64676 cri.go:89] found id: ""
	I1001 20:26:43.339572   64676 logs.go:276] 0 containers: []
	W1001 20:26:43.339582   64676 logs.go:278] No container was found matching "kindnet"
	I1001 20:26:43.339588   64676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1001 20:26:43.339667   64676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1001 20:26:43.382206   64676 cri.go:89] found id: "a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
	I1001 20:26:43.382230   64676 cri.go:89] found id: "652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:43.382234   64676 cri.go:89] found id: ""
	I1001 20:26:43.382241   64676 logs.go:276] 2 containers: [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b]
	I1001 20:26:43.382289   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.386473   64676 ssh_runner.go:195] Run: which crictl
	I1001 20:26:43.390146   64676 logs.go:123] Gathering logs for kubelet ...
	I1001 20:26:43.390172   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:26:43.457659   64676 logs.go:123] Gathering logs for dmesg ...
	I1001 20:26:43.457699   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:26:43.471078   64676 logs.go:123] Gathering logs for kube-apiserver [a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd] ...
	I1001 20:26:43.471109   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a64415a2dee8bad683a7c4491bbc76fb2f3638393fc4e31632b361477183addd"
	I1001 20:26:43.518058   64676 logs.go:123] Gathering logs for etcd [586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f] ...
	I1001 20:26:43.518093   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 586d6feee0436fb02e07d38c370aa68e1992b4b947db1fee74a670ec56d8e33f"
	I1001 20:26:43.559757   64676 logs.go:123] Gathering logs for kube-proxy [fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8] ...
	I1001 20:26:43.559788   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc3552d19417ae2baee4c80f30a7ecdcb1bc7fc43d1c52803c40cb2387034cc8"
	I1001 20:26:43.595485   64676 logs.go:123] Gathering logs for storage-provisioner [652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b] ...
	I1001 20:26:43.595513   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 652cab583d763b35b4b2b55b12740b0d16644903ed264b7a87eb89b80e4cf09b"
	I1001 20:26:43.628167   64676 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:26:43.628195   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 20:26:43.741206   64676 logs.go:123] Gathering logs for coredns [4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6] ...
	I1001 20:26:43.741234   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4380c36f31b673287f8caa3ac3494e7d415b45abcc49635136f8f00fcd36fed6"
	I1001 20:26:43.777220   64676 logs.go:123] Gathering logs for kube-scheduler [89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d] ...
	I1001 20:26:43.777248   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89f0e3dd97e8a1ab08e1f5180fbc62142205c26506094900d4e5c98458d8450d"
	I1001 20:26:43.817507   64676 logs.go:123] Gathering logs for kube-controller-manager [69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf] ...
	I1001 20:26:43.817536   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69adf90addf5f7677743ee2c4cc08630c7b2b2857ec9655c811d8f0ae05520cf"
	I1001 20:26:43.880127   64676 logs.go:123] Gathering logs for storage-provisioner [a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa] ...
	I1001 20:26:43.880161   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5ae72bcebfe460188f453a7a8623f93694e4ed488e8d0e801385afa47e1bfaa"
	I1001 20:26:43.915172   64676 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:26:43.915199   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:26:44.289237   64676 logs.go:123] Gathering logs for container status ...
	I1001 20:26:44.289277   64676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:26:46.835363   64676 system_pods.go:59] 8 kube-system pods found
	I1001 20:26:46.835393   64676 system_pods.go:61] "coredns-7c65d6cfc9-g8jf8" [7fbddef1-a564-4ee8-ab53-ae838d0fd984] Running
	I1001 20:26:46.835398   64676 system_pods.go:61] "etcd-no-preload-262337" [086d7949-d20d-49d8-871d-a464de60e4cb] Running
	I1001 20:26:46.835402   64676 system_pods.go:61] "kube-apiserver-no-preload-262337" [d8473136-4e07-43e2-bd20-65232e2d5102] Running
	I1001 20:26:46.835405   64676 system_pods.go:61] "kube-controller-manager-no-preload-262337" [63c7d071-20cd-48c5-b410-b78e339b0731] Running
	I1001 20:26:46.835408   64676 system_pods.go:61] "kube-proxy-7rrkn" [e25a055c-0203-4fe7-8801-560b9cdb27bb] Running
	I1001 20:26:46.835412   64676 system_pods.go:61] "kube-scheduler-no-preload-262337" [3b962e64-eea6-4c24-a230-32c40106a4dd] Running
	I1001 20:26:46.835418   64676 system_pods.go:61] "metrics-server-6867b74b74-2rpwt" [235515ab-28fc-437b-983a-243f7a8fb183] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:26:46.835422   64676 system_pods.go:61] "storage-provisioner" [8832193a-39b4-49b9-b943-3241bb27fb8d] Running
	I1001 20:26:46.835431   64676 system_pods.go:74] duration metric: took 3.786183909s to wait for pod list to return data ...
	I1001 20:26:46.835441   64676 default_sa.go:34] waiting for default service account to be created ...
	I1001 20:26:46.838345   64676 default_sa.go:45] found service account: "default"
	I1001 20:26:46.838367   64676 default_sa.go:55] duration metric: took 2.918089ms for default service account to be created ...
	I1001 20:26:46.838375   64676 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 20:26:46.844822   64676 system_pods.go:86] 8 kube-system pods found
	I1001 20:26:46.844850   64676 system_pods.go:89] "coredns-7c65d6cfc9-g8jf8" [7fbddef1-a564-4ee8-ab53-ae838d0fd984] Running
	I1001 20:26:46.844856   64676 system_pods.go:89] "etcd-no-preload-262337" [086d7949-d20d-49d8-871d-a464de60e4cb] Running
	I1001 20:26:46.844860   64676 system_pods.go:89] "kube-apiserver-no-preload-262337" [d8473136-4e07-43e2-bd20-65232e2d5102] Running
	I1001 20:26:46.844863   64676 system_pods.go:89] "kube-controller-manager-no-preload-262337" [63c7d071-20cd-48c5-b410-b78e339b0731] Running
	I1001 20:26:46.844867   64676 system_pods.go:89] "kube-proxy-7rrkn" [e25a055c-0203-4fe7-8801-560b9cdb27bb] Running
	I1001 20:26:46.844870   64676 system_pods.go:89] "kube-scheduler-no-preload-262337" [3b962e64-eea6-4c24-a230-32c40106a4dd] Running
	I1001 20:26:46.844876   64676 system_pods.go:89] "metrics-server-6867b74b74-2rpwt" [235515ab-28fc-437b-983a-243f7a8fb183] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:26:46.844881   64676 system_pods.go:89] "storage-provisioner" [8832193a-39b4-49b9-b943-3241bb27fb8d] Running
	I1001 20:26:46.844889   64676 system_pods.go:126] duration metric: took 6.508902ms to wait for k8s-apps to be running ...
	I1001 20:26:46.844895   64676 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 20:26:46.844934   64676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:26:46.861543   64676 system_svc.go:56] duration metric: took 16.63712ms WaitForService to wait for kubelet
	I1001 20:26:46.861586   64676 kubeadm.go:582] duration metric: took 4m24.915538002s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:26:46.861614   64676 node_conditions.go:102] verifying NodePressure condition ...
	I1001 20:26:46.864599   64676 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 20:26:46.864632   64676 node_conditions.go:123] node cpu capacity is 2
	I1001 20:26:46.864644   64676 node_conditions.go:105] duration metric: took 3.023838ms to run NodePressure ...
	I1001 20:26:46.864657   64676 start.go:241] waiting for startup goroutines ...
	I1001 20:26:46.864667   64676 start.go:246] waiting for cluster config update ...
	I1001 20:26:46.864682   64676 start.go:255] writing updated cluster config ...
	I1001 20:26:46.864960   64676 ssh_runner.go:195] Run: rm -f paused
	I1001 20:26:46.924982   64676 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 20:26:46.926817   64676 out.go:177] * Done! kubectl is now configured to use "no-preload-262337" cluster and "default" namespace by default
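With the "no-preload-262337" profile reported as ready, a quick follow-up check against the freshly written context could look like the commands below (illustrative; the context name is assumed to match the profile name in the line above, and the exact pod set will differ from run to run):

	kubectl --context no-preload-262337 get nodes
	kubectl --context no-preload-262337 get pods -n kube-system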
	I1001 20:26:45.880599   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:48.948631   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:55.028660   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:26:58.100570   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:04.180661   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:07.252656   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:13.332644   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:16.404640   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:22.484714   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:25.556606   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:31.636609   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:34.712725   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:40.788632   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
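The repeated "no route to host" on port 22 above shows that the default-k8s-diff-port guest was unreachable at 192.168.50.4 throughout this window. A quick manual check from the libvirt host could look like this (a sketch; it assumes ping and a bash with /dev/tcp support on the host):

	ping -c 1 192.168.50.4                                                   # is the guest reachable at all?
	timeout 5 bash -c '</dev/tcp/192.168.50.4/22' && echo "port 22 open"     # does anything answer on the SSH port?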
	I1001 20:27:43.940129   65592 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1001 20:27:43.940232   65592 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1001 20:27:43.942002   65592 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1001 20:27:43.942068   65592 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:27:43.942170   65592 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:27:43.942281   65592 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:27:43.942421   65592 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1001 20:27:43.942518   65592 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 20:27:43.944271   65592 out.go:235]   - Generating certificates and keys ...
	I1001 20:27:43.944389   65592 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:27:43.944486   65592 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:27:43.944600   65592 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 20:27:43.944693   65592 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 20:27:43.944797   65592 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 20:27:43.944888   65592 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 20:27:43.944985   65592 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 20:27:43.945072   65592 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 20:27:43.945190   65592 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 20:27:43.945301   65592 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 20:27:43.945361   65592 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 20:27:43.945420   65592 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:27:43.945467   65592 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:27:43.945515   65592 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:27:43.945585   65592 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:27:43.945651   65592 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:27:43.945772   65592 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:27:43.945899   65592 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:27:43.945961   65592 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:27:43.946057   65592 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:27:43.860704   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:43.947517   65592 out.go:235]   - Booting up control plane ...
	I1001 20:27:43.947644   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:27:43.947767   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:27:43.947861   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:27:43.947978   65592 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:27:43.948185   65592 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1001 20:27:43.948258   65592 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1001 20:27:43.948396   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.948618   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.948695   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.948930   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.948991   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.949149   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.949232   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.949380   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.949439   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:27:43.949597   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:27:43.949616   65592 kubeadm.go:310] 
	I1001 20:27:43.949658   65592 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1001 20:27:43.949693   65592 kubeadm.go:310] 		timed out waiting for the condition
	I1001 20:27:43.949704   65592 kubeadm.go:310] 
	I1001 20:27:43.949737   65592 kubeadm.go:310] 	This error is likely caused by:
	I1001 20:27:43.949766   65592 kubeadm.go:310] 		- The kubelet is not running
	I1001 20:27:43.949863   65592 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1001 20:27:43.949871   65592 kubeadm.go:310] 
	I1001 20:27:43.949968   65592 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1001 20:27:43.950000   65592 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1001 20:27:43.950034   65592 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1001 20:27:43.950040   65592 kubeadm.go:310] 
	I1001 20:27:43.950136   65592 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1001 20:27:43.950207   65592 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1001 20:27:43.950213   65592 kubeadm.go:310] 
	I1001 20:27:43.950310   65592 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1001 20:27:43.950389   65592 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1001 20:27:43.950454   65592 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1001 20:27:43.950533   65592 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1001 20:27:43.950566   65592 kubeadm.go:310] 
	W1001 20:27:43.950665   65592 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1001 20:27:43.950707   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1001 20:27:44.404995   65592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:27:44.421130   65592 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:27:44.431204   65592 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:27:44.431228   65592 kubeadm.go:157] found existing configuration files:
	
	I1001 20:27:44.431270   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 20:27:44.440792   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:27:44.440857   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:27:44.450469   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 20:27:44.459640   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:27:44.459695   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:27:44.469335   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 20:27:44.478848   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:27:44.478904   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:27:44.489162   65592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 20:27:44.501070   65592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:27:44.501157   65592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 20:27:44.511970   65592 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 20:27:44.728685   65592 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
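The config check above applies one pattern per kubeconfig file: grep for the expected control-plane endpoint and remove the file when the check fails. Condensed into a single sketch (endpoint and file list taken from the log lines above; this is an illustration, not minikube's actual code path):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
	    || sudo rm -f /etc/kubernetes/$f
	done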
	I1001 20:27:49.940611   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:53.016657   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:27:59.092700   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:02.164611   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:08.244707   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:11.316686   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:17.400607   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:20.468660   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:26.548687   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:29.624608   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:35.700638   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:38.772693   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:44.852721   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:47.924690   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:54.004674   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:28:57.080644   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:29:03.156750   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:29:06.232700   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:29:12.308749   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:29:15.380633   68418 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.4:22: connect: no route to host
	I1001 20:29:18.381649   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 20:29:18.381689   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetMachineName
	I1001 20:29:18.382037   68418 buildroot.go:166] provisioning hostname "default-k8s-diff-port-878552"
	I1001 20:29:18.382063   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetMachineName
	I1001 20:29:18.382291   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:18.384714   68418 machine.go:96] duration metric: took 4m37.419094583s to provisionDockerMachine
	I1001 20:29:18.384772   68418 fix.go:56] duration metric: took 4m37.442164125s for fixHost
	I1001 20:29:18.384782   68418 start.go:83] releasing machines lock for "default-k8s-diff-port-878552", held for 4m37.442187455s
	W1001 20:29:18.384813   68418 start.go:714] error starting host: provision: host is not running
	W1001 20:29:18.384993   68418 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1001 20:29:18.385017   68418 start.go:729] Will try again in 5 seconds ...
	I1001 20:29:23.387086   68418 start.go:360] acquireMachinesLock for default-k8s-diff-port-878552: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 20:29:23.387232   68418 start.go:364] duration metric: took 101.596µs to acquireMachinesLock for "default-k8s-diff-port-878552"
	I1001 20:29:23.387273   68418 start.go:96] Skipping create...Using existing machine configuration
	I1001 20:29:23.387284   68418 fix.go:54] fixHost starting: 
	I1001 20:29:23.387645   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:29:23.387669   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:29:23.403371   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39149
	I1001 20:29:23.404008   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:29:23.404580   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:29:23.404603   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:29:23.405181   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:29:23.405410   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:29:23.405560   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetState
	I1001 20:29:23.407563   68418 fix.go:112] recreateIfNeeded on default-k8s-diff-port-878552: state=Stopped err=<nil>
	I1001 20:29:23.407589   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	W1001 20:29:23.407771   68418 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 20:29:23.409721   68418 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-878552" ...
	I1001 20:29:23.410973   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Start
	I1001 20:29:23.411207   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Ensuring networks are active...
	I1001 20:29:23.412117   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Ensuring network default is active
	I1001 20:29:23.412576   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Ensuring network mk-default-k8s-diff-port-878552 is active
	I1001 20:29:23.412956   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Getting domain xml...
	I1001 20:29:23.413589   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Creating domain...
	I1001 20:29:24.744972   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting to get IP...
	I1001 20:29:24.746001   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:24.746641   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:24.746710   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:24.746607   69521 retry.go:31] will retry after 260.966833ms: waiting for machine to come up
	I1001 20:29:25.009284   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.009825   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.009849   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:25.009778   69521 retry.go:31] will retry after 308.10041ms: waiting for machine to come up
	I1001 20:29:25.319153   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.319717   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.319752   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:25.319652   69521 retry.go:31] will retry after 342.802984ms: waiting for machine to come up
	I1001 20:29:25.664405   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.664893   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:25.664920   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:25.664816   69521 retry.go:31] will retry after 397.002924ms: waiting for machine to come up
	I1001 20:29:26.063628   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:26.064235   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:26.064259   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:26.064201   69521 retry.go:31] will retry after 526.648832ms: waiting for machine to come up
	I1001 20:29:26.592834   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:26.593284   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:26.593307   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:26.593226   69521 retry.go:31] will retry after 642.569388ms: waiting for machine to come up
	I1001 20:29:27.237224   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:27.237775   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:27.237808   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:27.237714   69521 retry.go:31] will retry after 963.05932ms: waiting for machine to come up
	I1001 20:29:28.202841   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:28.203333   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:28.203363   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:28.203287   69521 retry.go:31] will retry after 1.372004234s: waiting for machine to come up
	I1001 20:29:29.577175   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:29.577678   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:29.577706   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:29.577627   69521 retry.go:31] will retry after 1.693508507s: waiting for machine to come up
	I1001 20:29:31.273758   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:31.274247   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:31.274274   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:31.274201   69521 retry.go:31] will retry after 1.793304779s: waiting for machine to come up
	I1001 20:29:33.069467   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:33.069894   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:33.069915   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:33.069861   69521 retry.go:31] will retry after 2.825253867s: waiting for machine to come up
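While the kvm2 driver polls with a growing back-off for the VM's DHCP lease, the same information can be inspected by hand on the libvirt host. A hedged manual check, using the domain and network names reported in the log and assuming virsh is available:

	sudo virsh domifaddr default-k8s-diff-port-878552             # interface addresses reported for the domain
	sudo virsh net-dhcp-leases mk-default-k8s-diff-port-878552    # leases handed out on the private network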
	I1001 20:29:40.678676   65592 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1001 20:29:40.678797   65592 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1001 20:29:40.680563   65592 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1001 20:29:40.680613   65592 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:29:40.680680   65592 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:29:40.680788   65592 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:29:40.680868   65592 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1001 20:29:40.681030   65592 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 20:29:40.683042   65592 out.go:235]   - Generating certificates and keys ...
	I1001 20:29:40.683149   65592 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:29:40.683245   65592 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:29:40.683353   65592 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 20:29:40.683435   65592 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 20:29:40.683545   65592 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 20:29:40.683605   65592 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 20:29:40.683665   65592 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 20:29:40.683723   65592 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 20:29:40.683793   65592 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 20:29:40.683878   65592 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 20:29:40.683956   65592 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 20:29:40.684054   65592 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:29:40.684127   65592 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:29:40.684212   65592 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:29:40.684303   65592 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:29:40.684414   65592 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:29:40.684551   65592 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:29:40.684661   65592 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:29:40.684724   65592 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:29:40.684827   65592 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:29:35.897417   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:35.897916   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:35.897949   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:35.897862   69521 retry.go:31] will retry after 3.519866937s: waiting for machine to come up
	I1001 20:29:39.419142   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:39.419528   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | unable to find current IP address of domain default-k8s-diff-port-878552 in network mk-default-k8s-diff-port-878552
	I1001 20:29:39.419554   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | I1001 20:29:39.419494   69521 retry.go:31] will retry after 3.507101438s: waiting for machine to come up
	I1001 20:29:40.686427   65592 out.go:235]   - Booting up control plane ...
	I1001 20:29:40.686534   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:29:40.686621   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:29:40.686710   65592 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:29:40.686820   65592 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:29:40.686996   65592 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1001 20:29:40.687063   65592 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1001 20:29:40.687127   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.687336   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.687443   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.687674   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.687759   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.687958   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.688047   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.688212   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.688274   65592 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 20:29:40.688510   65592 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 20:29:40.688519   65592 kubeadm.go:310] 
	I1001 20:29:40.688566   65592 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1001 20:29:40.688610   65592 kubeadm.go:310] 		timed out waiting for the condition
	I1001 20:29:40.688617   65592 kubeadm.go:310] 
	I1001 20:29:40.688646   65592 kubeadm.go:310] 	This error is likely caused by:
	I1001 20:29:40.688680   65592 kubeadm.go:310] 		- The kubelet is not running
	I1001 20:29:40.688770   65592 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1001 20:29:40.688778   65592 kubeadm.go:310] 
	I1001 20:29:40.688882   65592 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1001 20:29:40.688937   65592 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1001 20:29:40.688986   65592 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1001 20:29:40.688996   65592 kubeadm.go:310] 
	I1001 20:29:40.689114   65592 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1001 20:29:40.689222   65592 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1001 20:29:40.689237   65592 kubeadm.go:310] 
	I1001 20:29:40.689376   65592 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1001 20:29:40.689517   65592 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1001 20:29:40.689638   65592 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1001 20:29:40.689709   65592 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1001 20:29:40.689786   65592 kubeadm.go:310] 
	I1001 20:29:40.689796   65592 kubeadm.go:394] duration metric: took 7m56.416911577s to StartCluster
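The failure output above already names the standard troubleshooting path for a control plane that never came up. Collected in one place, and assuming a CRI-O node as used in this job, the suggested commands are:

	systemctl status kubelet
	journalctl -xeu kubelet
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID    # substitute the failing container's ID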
	I1001 20:29:40.689838   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:29:40.689896   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:29:40.733027   65592 cri.go:89] found id: ""
	I1001 20:29:40.733059   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.733068   65592 logs.go:278] No container was found matching "kube-apiserver"
	I1001 20:29:40.733073   65592 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 20:29:40.733120   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:29:40.767975   65592 cri.go:89] found id: ""
	I1001 20:29:40.768010   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.768021   65592 logs.go:278] No container was found matching "etcd"
	I1001 20:29:40.768029   65592 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 20:29:40.768095   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:29:40.802624   65592 cri.go:89] found id: ""
	I1001 20:29:40.802657   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.802668   65592 logs.go:278] No container was found matching "coredns"
	I1001 20:29:40.802676   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:29:40.802748   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:29:40.838109   65592 cri.go:89] found id: ""
	I1001 20:29:40.838142   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.838151   65592 logs.go:278] No container was found matching "kube-scheduler"
	I1001 20:29:40.838157   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:29:40.838204   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:29:40.873083   65592 cri.go:89] found id: ""
	I1001 20:29:40.873112   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.873124   65592 logs.go:278] No container was found matching "kube-proxy"
	I1001 20:29:40.873131   65592 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:29:40.873192   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:29:40.907675   65592 cri.go:89] found id: ""
	I1001 20:29:40.907705   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.907714   65592 logs.go:278] No container was found matching "kube-controller-manager"
	I1001 20:29:40.907720   65592 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 20:29:40.907775   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:29:40.941641   65592 cri.go:89] found id: ""
	I1001 20:29:40.941669   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.941678   65592 logs.go:278] No container was found matching "kindnet"
	I1001 20:29:40.941691   65592 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:29:40.941748   65592 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:29:40.978189   65592 cri.go:89] found id: ""
	I1001 20:29:40.978216   65592 logs.go:276] 0 containers: []
	W1001 20:29:40.978227   65592 logs.go:278] No container was found matching "kubernetes-dashboard"
	I1001 20:29:40.978238   65592 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:29:40.978254   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 20:29:41.053798   65592 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 20:29:41.053823   65592 logs.go:123] Gathering logs for CRI-O ...
	I1001 20:29:41.053835   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 20:29:41.160669   65592 logs.go:123] Gathering logs for container status ...
	I1001 20:29:41.160715   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:29:41.218152   65592 logs.go:123] Gathering logs for kubelet ...
	I1001 20:29:41.218182   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 20:29:41.274784   65592 logs.go:123] Gathering logs for dmesg ...
	I1001 20:29:41.274821   65592 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1001 20:29:41.288554   65592 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1001 20:29:41.288613   65592 out.go:270] * 
	W1001 20:29:41.288663   65592 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1001 20:29:41.288674   65592 out.go:270] * 
	W1001 20:29:41.289525   65592 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 20:29:41.292969   65592 out.go:201] 
	W1001 20:29:41.294238   65592 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1001 20:29:41.294278   65592 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1001 20:29:41.294297   65592 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1001 20:29:41.295783   65592 out.go:201] 
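	The kubeadm failure above leaves two actionable hints in the log itself: inspect the kubelet on the guest, and retry the start with the systemd cgroup driver that minikube suggests next to issue #4172. A minimal sketch of those steps follows; the profile name is a placeholder, only the commands the log itself prints are taken as given, and whether the cgroup-driver flag actually resolves this run is not established by the log.
	
		# on the guest (e.g. via `minikube ssh -p <profile>`): check kubelet health first
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet | tail -n 100
		# list any control-plane containers CRI-O managed to start
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		# from the host: retry with the cgroup driver the suggestion above names (<profile> is a placeholder)
		minikube start -p <profile> --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd
	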
	I1001 20:29:42.929490   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:42.930036   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has current primary IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:42.930058   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Found IP for machine: 192.168.50.4
	I1001 20:29:42.930091   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Reserving static IP address...
	I1001 20:29:42.930623   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-878552", mac: "52:54:00:72:13:05", ip: "192.168.50.4"} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:42.930660   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | skip adding static IP to network mk-default-k8s-diff-port-878552 - found existing host DHCP lease matching {name: "default-k8s-diff-port-878552", mac: "52:54:00:72:13:05", ip: "192.168.50.4"}
	I1001 20:29:42.930686   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Reserved static IP address: 192.168.50.4
	I1001 20:29:42.930703   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Waiting for SSH to be available...
	I1001 20:29:42.930719   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | Getting to WaitForSSH function...
	I1001 20:29:42.933472   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:42.933911   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:42.933948   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:42.934106   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | Using SSH client type: external
	I1001 20:29:42.934134   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa (-rw-------)
	I1001 20:29:42.934168   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.4 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 20:29:42.934190   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | About to run SSH command:
	I1001 20:29:42.934210   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | exit 0
	I1001 20:29:43.064425   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | SSH cmd err, output: <nil>: 
	I1001 20:29:43.064821   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetConfigRaw
	I1001 20:29:43.065476   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetIP
	I1001 20:29:43.068442   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.068951   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.068982   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.069236   68418 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/config.json ...
	I1001 20:29:43.069476   68418 machine.go:93] provisionDockerMachine start ...
	I1001 20:29:43.069498   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:29:43.069726   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:43.072374   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.072720   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.072754   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.072974   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:43.073170   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.073358   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.073501   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:43.073685   68418 main.go:141] libmachine: Using SSH client type: native
	I1001 20:29:43.073919   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1001 20:29:43.073946   68418 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 20:29:43.188588   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1001 20:29:43.188626   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetMachineName
	I1001 20:29:43.188887   68418 buildroot.go:166] provisioning hostname "default-k8s-diff-port-878552"
	I1001 20:29:43.188948   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetMachineName
	I1001 20:29:43.189182   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:43.192158   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.192550   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.192575   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.192743   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:43.192918   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.193081   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.193193   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:43.193317   68418 main.go:141] libmachine: Using SSH client type: native
	I1001 20:29:43.193466   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1001 20:29:43.193478   68418 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-878552 && echo "default-k8s-diff-port-878552" | sudo tee /etc/hostname
	I1001 20:29:43.318342   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-878552
	
	I1001 20:29:43.318381   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:43.321205   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.321777   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.321807   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.322031   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:43.322218   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.322360   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.322515   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:43.322729   68418 main.go:141] libmachine: Using SSH client type: native
	I1001 20:29:43.322907   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1001 20:29:43.322925   68418 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-878552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-878552/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-878552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 20:29:43.440839   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 20:29:43.440884   68418 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 20:29:43.440949   68418 buildroot.go:174] setting up certificates
	I1001 20:29:43.440966   68418 provision.go:84] configureAuth start
	I1001 20:29:43.440982   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetMachineName
	I1001 20:29:43.441238   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetIP
	I1001 20:29:43.443849   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.444223   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.444257   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.444432   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:43.446569   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.447004   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.447032   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.447130   68418 provision.go:143] copyHostCerts
	I1001 20:29:43.447210   68418 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 20:29:43.447224   68418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 20:29:43.447317   68418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 20:29:43.447430   68418 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 20:29:43.447442   68418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 20:29:43.447484   68418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 20:29:43.447560   68418 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 20:29:43.447570   68418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 20:29:43.447602   68418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 20:29:43.447729   68418 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-878552 san=[127.0.0.1 192.168.50.4 default-k8s-diff-port-878552 localhost minikube]
	I1001 20:29:43.597134   68418 provision.go:177] copyRemoteCerts
	I1001 20:29:43.597195   68418 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 20:29:43.597216   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:43.599988   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.600379   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.600414   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.600598   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:43.600799   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.600970   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:43.601115   68418 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa Username:docker}
	I1001 20:29:43.687211   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 20:29:43.714280   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1001 20:29:43.738536   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 20:29:43.764130   68418 provision.go:87] duration metric: took 323.147928ms to configureAuth
	I1001 20:29:43.764163   68418 buildroot.go:189] setting minikube options for container-runtime
	I1001 20:29:43.764353   68418 config.go:182] Loaded profile config "default-k8s-diff-port-878552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:29:43.764462   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:43.767588   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.767962   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:43.767991   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:43.768181   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:43.768339   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.768525   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:43.768665   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:43.768833   68418 main.go:141] libmachine: Using SSH client type: native
	I1001 20:29:43.768994   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1001 20:29:43.769013   68418 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 20:29:43.998941   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 20:29:43.998964   68418 machine.go:96] duration metric: took 929.475626ms to provisionDockerMachine
	I1001 20:29:43.998976   68418 start.go:293] postStartSetup for "default-k8s-diff-port-878552" (driver="kvm2")
	I1001 20:29:43.998989   68418 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 20:29:43.999008   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:29:43.999305   68418 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 20:29:43.999332   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:44.001854   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.002381   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:44.002433   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.002555   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:44.002787   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:44.002967   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:44.003142   68418 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa Username:docker}
	I1001 20:29:44.091378   68418 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 20:29:44.096207   68418 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 20:29:44.096235   68418 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 20:29:44.096315   68418 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 20:29:44.096424   68418 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 20:29:44.096531   68418 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 20:29:44.106232   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 20:29:44.130524   68418 start.go:296] duration metric: took 131.532724ms for postStartSetup
	I1001 20:29:44.130564   68418 fix.go:56] duration metric: took 20.743280839s for fixHost
	I1001 20:29:44.130589   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:44.133873   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.134285   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:44.134309   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.134509   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:44.134719   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:44.134873   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:44.135025   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:44.135172   68418 main.go:141] libmachine: Using SSH client type: native
	I1001 20:29:44.135362   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1001 20:29:44.135376   68418 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 20:29:44.249136   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727814584.207146331
	
	I1001 20:29:44.249160   68418 fix.go:216] guest clock: 1727814584.207146331
	I1001 20:29:44.249189   68418 fix.go:229] Guest: 2024-10-01 20:29:44.207146331 +0000 UTC Remote: 2024-10-01 20:29:44.13056925 +0000 UTC m=+303.335525185 (delta=76.577081ms)
	I1001 20:29:44.249215   68418 fix.go:200] guest clock delta is within tolerance: 76.577081ms
	I1001 20:29:44.249220   68418 start.go:83] releasing machines lock for "default-k8s-diff-port-878552", held for 20.861972701s
	I1001 20:29:44.249238   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:29:44.249527   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetIP
	I1001 20:29:44.252984   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.253526   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:44.253569   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.253903   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:29:44.254449   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:29:44.254618   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:29:44.254680   68418 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 20:29:44.254727   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:44.254810   68418 ssh_runner.go:195] Run: cat /version.json
	I1001 20:29:44.254833   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:29:44.257550   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.257826   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.258077   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:44.258114   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.258363   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:44.258489   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:44.258529   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:44.258563   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:44.258683   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:29:44.258784   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:44.258832   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:29:44.258915   68418 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa Username:docker}
	I1001 20:29:44.258965   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:29:44.259113   68418 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa Username:docker}
	I1001 20:29:44.379049   68418 ssh_runner.go:195] Run: systemctl --version
	I1001 20:29:44.384985   68418 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 20:29:44.527579   68418 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 20:29:44.533267   68418 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 20:29:44.533357   68418 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 20:29:44.552308   68418 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 20:29:44.552333   68418 start.go:495] detecting cgroup driver to use...
	I1001 20:29:44.552421   68418 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 20:29:44.573762   68418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 20:29:44.588010   68418 docker.go:217] disabling cri-docker service (if available) ...
	I1001 20:29:44.588063   68418 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 20:29:44.602369   68418 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 20:29:44.618754   68418 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 20:29:44.757380   68418 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 20:29:44.941718   68418 docker.go:233] disabling docker service ...
	I1001 20:29:44.941790   68418 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 20:29:44.957306   68418 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 20:29:44.971723   68418 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 20:29:45.094124   68418 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 20:29:45.220645   68418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 20:29:45.236217   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 20:29:45.255752   68418 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 20:29:45.255820   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:29:45.266327   68418 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 20:29:45.266398   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:29:45.276964   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:29:45.288013   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:29:45.298669   68418 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 20:29:45.309693   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:29:45.320041   68418 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:29:45.336621   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:29:45.346862   68418 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 20:29:45.357656   68418 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 20:29:45.357717   68418 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 20:29:45.372693   68418 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 20:29:45.383796   68418 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:29:45.524957   68418 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 20:29:45.611630   68418 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 20:29:45.611702   68418 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 20:29:45.616520   68418 start.go:563] Will wait 60s for crictl version
	I1001 20:29:45.616587   68418 ssh_runner.go:195] Run: which crictl
	I1001 20:29:45.620321   68418 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 20:29:45.661806   68418 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 20:29:45.661890   68418 ssh_runner.go:195] Run: crio --version
	I1001 20:29:45.690843   68418 ssh_runner.go:195] Run: crio --version
	I1001 20:29:45.720183   68418 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 20:29:45.721659   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetIP
	I1001 20:29:45.724986   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:45.725349   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:29:45.725376   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:29:45.725583   68418 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1001 20:29:45.729522   68418 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 20:29:45.741877   68418 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-878552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-878552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.4 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 20:29:45.742008   68418 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 20:29:45.742051   68418 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:29:45.779002   68418 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1001 20:29:45.779081   68418 ssh_runner.go:195] Run: which lz4
	I1001 20:29:45.782751   68418 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 20:29:45.786704   68418 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 20:29:45.786733   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1001 20:29:47.072431   68418 crio.go:462] duration metric: took 1.289701438s to copy over tarball
	I1001 20:29:47.072508   68418 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 20:29:49.166576   68418 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.094040254s)
	I1001 20:29:49.166604   68418 crio.go:469] duration metric: took 2.094143226s to extract the tarball
	I1001 20:29:49.166613   68418 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1001 20:29:49.203988   68418 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:29:49.250464   68418 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 20:29:49.250490   68418 cache_images.go:84] Images are preloaded, skipping loading
	I1001 20:29:49.250499   68418 kubeadm.go:934] updating node { 192.168.50.4 8444 v1.31.1 crio true true} ...
	I1001 20:29:49.250612   68418 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-878552 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-878552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 20:29:49.250697   68418 ssh_runner.go:195] Run: crio config
	I1001 20:29:49.298003   68418 cni.go:84] Creating CNI manager for ""
	I1001 20:29:49.298024   68418 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:29:49.298032   68418 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 20:29:49.298055   68418 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.4 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-878552 NodeName:default-k8s-diff-port-878552 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 20:29:49.298183   68418 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.4
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-878552"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 20:29:49.298253   68418 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 20:29:49.308945   68418 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 20:29:49.309011   68418 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 20:29:49.319017   68418 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1001 20:29:49.335588   68418 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 20:29:49.351598   68418 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
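The 2166-byte kubeadm.yaml.new written above is the rendered form of the kubeadm config shown earlier. If one wanted to sanity-check such a file by hand, a dry run against it would be one way; this is an illustrative command using the standard kubeadm --dry-run flag, not something the test itself executes:

    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run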
	I1001 20:29:49.369172   68418 ssh_runner.go:195] Run: grep 192.168.50.4	control-plane.minikube.internal$ /etc/hosts
	I1001 20:29:49.372755   68418 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.4	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 20:29:49.385529   68418 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:29:49.509676   68418 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:29:49.527149   68418 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552 for IP: 192.168.50.4
	I1001 20:29:49.527170   68418 certs.go:194] generating shared ca certs ...
	I1001 20:29:49.527185   68418 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:29:49.527321   68418 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 20:29:49.527368   68418 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 20:29:49.527378   68418 certs.go:256] generating profile certs ...
	I1001 20:29:49.527456   68418 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/client.key
	I1001 20:29:49.527514   68418 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/apiserver.key.7bbee9b6
	I1001 20:29:49.527555   68418 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/proxy-client.key
	I1001 20:29:49.527668   68418 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem (1338 bytes)
	W1001 20:29:49.527707   68418 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430_empty.pem, impossibly tiny 0 bytes
	I1001 20:29:49.527735   68418 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 20:29:49.527772   68418 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 20:29:49.527811   68418 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 20:29:49.527848   68418 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 20:29:49.527907   68418 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem (1708 bytes)
	I1001 20:29:49.529210   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 20:29:49.577743   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 20:29:49.617960   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 20:29:49.659543   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 20:29:49.709464   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1001 20:29:49.734308   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 20:29:49.759576   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 20:29:49.784416   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/default-k8s-diff-port-878552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 20:29:49.809150   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 20:29:49.833580   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem --> /usr/share/ca-certificates/18430.pem (1338 bytes)
	I1001 20:29:49.857628   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /usr/share/ca-certificates/184302.pem (1708 bytes)
	I1001 20:29:49.880924   68418 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 20:29:49.897478   68418 ssh_runner.go:195] Run: openssl version
	I1001 20:29:49.903488   68418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 20:29:49.914490   68418 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:29:49.919105   68418 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:29:49.919165   68418 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:29:49.925133   68418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 20:29:49.936294   68418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18430.pem && ln -fs /usr/share/ca-certificates/18430.pem /etc/ssl/certs/18430.pem"
	I1001 20:29:49.946630   68418 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18430.pem
	I1001 20:29:49.951255   68418 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 20:29:49.951308   68418 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18430.pem
	I1001 20:29:49.957277   68418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18430.pem /etc/ssl/certs/51391683.0"
	I1001 20:29:49.971166   68418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/184302.pem && ln -fs /usr/share/ca-certificates/184302.pem /etc/ssl/certs/184302.pem"
	I1001 20:29:49.982558   68418 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/184302.pem
	I1001 20:29:49.986947   68418 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 20:29:49.987003   68418 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/184302.pem
	I1001 20:29:49.992569   68418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/184302.pem /etc/ssl/certs/3ec20f2e.0"
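The b5213941.0, 51391683.0 and 3ec20f2e.0 link names above follow OpenSSL's subject-hash convention: each CA is symlinked into /etc/ssl/certs under the value printed by `openssl x509 -hash` so the verifier can locate it. A generic reconstruction of that pattern (paths here are only an example):

    # Sketch of the hash-named symlink scheme used above.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")     # e.g. b5213941
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"    # ".0" = first cert with this hash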
	I1001 20:29:50.002922   68418 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 20:29:50.007707   68418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1001 20:29:50.013717   68418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1001 20:29:50.020166   68418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1001 20:29:50.026795   68418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1001 20:29:50.033544   68418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1001 20:29:50.039686   68418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
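Each `-checkend 86400` call above exits non-zero if the certificate would expire within the next 24 hours, which is how the restart path decides whether certs need regeneration. The same check, folded into a small loop over the files probed above (illustrative only):

    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
             etcd/healthcheck-client etcd/peer front-proxy-client; do
      openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
        && echo "OK             ${c}" \
        || echo "EXPIRES <24h   ${c}"
    done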
	I1001 20:29:50.045837   68418 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-878552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-878552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.4 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:29:50.045971   68418 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 20:29:50.046025   68418 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 20:29:50.086925   68418 cri.go:89] found id: ""
	I1001 20:29:50.086999   68418 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 20:29:50.097130   68418 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1001 20:29:50.097167   68418 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1001 20:29:50.097222   68418 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1001 20:29:50.108298   68418 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1001 20:29:50.109389   68418 kubeconfig.go:125] found "default-k8s-diff-port-878552" server: "https://192.168.50.4:8444"
	I1001 20:29:50.111587   68418 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1001 20:29:50.122158   68418 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.4
	I1001 20:29:50.122199   68418 kubeadm.go:1160] stopping kube-system containers ...
	I1001 20:29:50.122213   68418 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1001 20:29:50.122281   68418 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 20:29:50.160351   68418 cri.go:89] found id: ""
	I1001 20:29:50.160434   68418 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1001 20:29:50.178857   68418 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:29:50.190857   68418 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:29:50.190879   68418 kubeadm.go:157] found existing configuration files:
	
	I1001 20:29:50.190926   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1001 20:29:50.200391   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:29:50.200449   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:29:50.210388   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1001 20:29:50.219943   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:29:50.220007   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:29:50.229576   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1001 20:29:50.239983   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:29:50.240055   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:29:50.251062   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1001 20:29:50.261349   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:29:50.261430   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 20:29:50.271284   68418 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 20:29:50.281256   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:29:50.393255   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:29:51.469349   68418 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.076029092s)
	I1001 20:29:51.469386   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:29:51.683522   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:29:51.749545   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:29:51.856549   68418 api_server.go:52] waiting for apiserver process to appear ...
	I1001 20:29:51.856662   68418 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:29:52.356980   68418 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:29:52.857568   68418 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:29:53.357123   68418 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:29:53.372308   68418 api_server.go:72] duration metric: took 1.515757915s to wait for apiserver process to appear ...
	I1001 20:29:53.372341   68418 api_server.go:88] waiting for apiserver healthz status ...
	I1001 20:29:53.372387   68418 api_server.go:253] Checking apiserver healthz at https://192.168.50.4:8444/healthz ...
	I1001 20:29:53.372877   68418 api_server.go:269] stopped: https://192.168.50.4:8444/healthz: Get "https://192.168.50.4:8444/healthz": dial tcp 192.168.50.4:8444: connect: connection refused
	I1001 20:29:53.872447   68418 api_server.go:253] Checking apiserver healthz at https://192.168.50.4:8444/healthz ...
	I1001 20:29:56.591087   68418 api_server.go:279] https://192.168.50.4:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1001 20:29:56.591111   68418 api_server.go:103] status: https://192.168.50.4:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1001 20:29:56.591122   68418 api_server.go:253] Checking apiserver healthz at https://192.168.50.4:8444/healthz ...
	I1001 20:29:56.668641   68418 api_server.go:279] https://192.168.50.4:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1001 20:29:56.668672   68418 api_server.go:103] status: https://192.168.50.4:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1001 20:29:56.872906   68418 api_server.go:253] Checking apiserver healthz at https://192.168.50.4:8444/healthz ...
	I1001 20:29:56.882393   68418 api_server.go:279] https://192.168.50.4:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 20:29:56.882433   68418 api_server.go:103] status: https://192.168.50.4:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 20:29:57.372590   68418 api_server.go:253] Checking apiserver healthz at https://192.168.50.4:8444/healthz ...
	I1001 20:29:57.377715   68418 api_server.go:279] https://192.168.50.4:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 20:29:57.377745   68418 api_server.go:103] status: https://192.168.50.4:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 20:29:57.873466   68418 api_server.go:253] Checking apiserver healthz at https://192.168.50.4:8444/healthz ...
	I1001 20:29:57.879628   68418 api_server.go:279] https://192.168.50.4:8444/healthz returned 200:
	ok
	I1001 20:29:57.889478   68418 api_server.go:141] control plane version: v1.31.1
	I1001 20:29:57.889512   68418 api_server.go:131] duration metric: took 4.517163838s to wait for apiserver health ...
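The progression above is the normal apiserver restart sequence: connection refused while the static pod comes up, 403 because the unauthenticated probe is seen as system:anonymous before RBAC is bootstrapped, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still failing, then 200. A hand-run equivalent of the probe (illustrative; -k skips TLS verification, exactly as an anonymous check would):

    curl -k https://192.168.50.4:8444/healthz            # may return 403/500 during startup
    curl -k "https://192.168.50.4:8444/healthz?verbose"  # per-check breakdown like the log above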
	I1001 20:29:57.889520   68418 cni.go:84] Creating CNI manager for ""
	I1001 20:29:57.889534   68418 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:29:57.891485   68418 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 20:29:57.892936   68418 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 20:29:57.910485   68418 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
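The 496-byte /etc/cni/net.d/1-k8s.conflist pushed above is minikube's bridge CNI configuration. Its exact contents are not in the log; the snippet below is a generic bridge-plugin conflist for the 10.244.0.0/16 pod CIDR, written only to illustrate the shape of such a file (assumed, not copied from the node):

    # Generic example of a bridge CNI conflist; not minikube's exact file.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF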
	I1001 20:29:57.930071   68418 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 20:29:57.940155   68418 system_pods.go:59] 8 kube-system pods found
	I1001 20:29:57.940191   68418 system_pods.go:61] "coredns-7c65d6cfc9-cmchv" [55a0612c-d596-4799-a9f9-0b6d9361ca15] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 20:29:57.940202   68418 system_pods.go:61] "etcd-default-k8s-diff-port-878552" [bcd7c228-d83d-4eec-9a64-f33dac086dcd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1001 20:29:57.940211   68418 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-878552" [23602015-b245-4e14-a076-2e9efb0f2f66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1001 20:29:57.940232   68418 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-878552" [e94298d4-75e3-4fbb-b361-6e5248273355] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1001 20:29:57.940239   68418 system_pods.go:61] "kube-proxy-sxxfj" [2bd75205-221e-498e-8a80-1e7a727fd4e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1001 20:29:57.940246   68418 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-878552" [ddcacd2c-3781-46df-83f8-e6763485a55d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1001 20:29:57.940254   68418 system_pods.go:61] "metrics-server-6867b74b74-b62f8" [26359941-b4d3-442c-ae46-4303a2f7b5e0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:29:57.940262   68418 system_pods.go:61] "storage-provisioner" [a34592b0-f9e5-465b-9d64-07cf84f0c951] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1001 20:29:57.940279   68418 system_pods.go:74] duration metric: took 10.189531ms to wait for pod list to return data ...
	I1001 20:29:57.940292   68418 node_conditions.go:102] verifying NodePressure condition ...
	I1001 20:29:57.945316   68418 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 20:29:57.945349   68418 node_conditions.go:123] node cpu capacity is 2
	I1001 20:29:57.945362   68418 node_conditions.go:105] duration metric: took 5.063896ms to run NodePressure ...
	I1001 20:29:57.945380   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 20:29:58.233781   68418 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1001 20:29:58.237692   68418 kubeadm.go:739] kubelet initialised
	I1001 20:29:58.237713   68418 kubeadm.go:740] duration metric: took 3.903724ms waiting for restarted kubelet to initialise ...
	I1001 20:29:58.237721   68418 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:29:58.243500   68418 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-cmchv" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:00.249577   68418 pod_ready.go:103] pod "coredns-7c65d6cfc9-cmchv" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:02.250329   68418 pod_ready.go:103] pod "coredns-7c65d6cfc9-cmchv" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:04.750635   68418 pod_ready.go:103] pod "coredns-7c65d6cfc9-cmchv" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:06.751559   68418 pod_ready.go:93] pod "coredns-7c65d6cfc9-cmchv" in "kube-system" namespace has status "Ready":"True"
	I1001 20:30:06.751583   68418 pod_ready.go:82] duration metric: took 8.508053751s for pod "coredns-7c65d6cfc9-cmchv" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:06.751594   68418 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:08.757727   68418 pod_ready.go:103] pod "etcd-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:10.260326   68418 pod_ready.go:93] pod "etcd-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:30:10.260352   68418 pod_ready.go:82] duration metric: took 3.508751351s for pod "etcd-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.260388   68418 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.267041   68418 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:30:10.267071   68418 pod_ready.go:82] duration metric: took 6.67429ms for pod "kube-apiserver-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.267083   68418 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.773135   68418 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:30:10.773156   68418 pod_ready.go:82] duration metric: took 506.065053ms for pod "kube-controller-manager-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.773166   68418 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-sxxfj" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.777890   68418 pod_ready.go:93] pod "kube-proxy-sxxfj" in "kube-system" namespace has status "Ready":"True"
	I1001 20:30:10.777910   68418 pod_ready.go:82] duration metric: took 4.738315ms for pod "kube-proxy-sxxfj" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.777918   68418 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.782610   68418 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:30:10.782634   68418 pod_ready.go:82] duration metric: took 4.708989ms for pod "kube-scheduler-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:10.782644   68418 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace to be "Ready" ...
	I1001 20:30:12.789050   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:15.290635   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:17.290867   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:19.789502   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:21.789999   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:24.289487   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:26.789083   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:28.789955   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:30.790439   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:33.289188   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:35.289313   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:37.289903   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:39.788459   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:41.788633   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:43.788867   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:46.290002   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:48.789891   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:51.289334   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:53.788643   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:55.789983   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:30:58.288949   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:00.289478   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:02.290789   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:04.789722   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:07.289474   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:09.290183   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:11.790355   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:14.289284   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:16.289536   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:18.289606   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:20.789261   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:22.789463   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:25.290185   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:27.788643   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:29.788778   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:31.790285   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:34.288230   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:36.288784   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:38.289862   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:40.789317   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:43.289232   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:45.290400   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:47.788723   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:49.789327   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:52.289114   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:54.788895   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:56.788984   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:31:59.288473   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:01.789415   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:04.289328   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:06.289615   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:08.788879   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:10.790191   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:13.288885   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:15.789008   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:17.789191   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:19.789559   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:22.288958   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:24.290206   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:26.788241   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:28.789457   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:31.288929   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:33.789418   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:35.789932   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:38.288742   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:40.289667   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:42.789129   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:44.790115   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:47.289310   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:49.289558   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:51.789255   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:54.289586   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:56.788032   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:32:58.789012   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:01.289206   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:03.788129   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:05.788915   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:07.790124   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:10.289206   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:12.789314   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:14.789636   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:17.288443   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:19.289524   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:21.289650   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:23.789802   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:26.289735   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:28.788897   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:30.789339   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:33.289295   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:35.289664   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:37.789968   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:40.289657   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:42.789430   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:45.289320   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:47.789980   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:50.287836   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:52.289028   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:54.788936   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:56.789521   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:33:59.289778   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:34:01.788398   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:34:03.789045   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:34:05.789391   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:34:08.289322   68418 pod_ready.go:103] pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:34:10.783748   68418 pod_ready.go:82] duration metric: took 4m0.001085136s for pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace to be "Ready" ...
	E1001 20:34:10.783784   68418 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-b62f8" in "kube-system" namespace to be "Ready" (will not retry!)
	I1001 20:34:10.783805   68418 pod_ready.go:39] duration metric: took 4m12.546072786s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
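
The run of pod_ready entries above is a Ready-condition poll that gives up after 4m0s. For reproducing that kind of wait outside minikube, here is a minimal client-go sketch; the kubeconfig path and poll interval are illustrative placeholders, and this is not minikube's actual implementation (which lives in pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative placeholders; adjust to your environment.
	kubeconfig := "/home/jenkins/.kube/config"
	namespace := "kube-system"
	podName := "metrics-server-6867b74b74-b62f8"

	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 2s and give up after 4 minutes, mirroring the timeout seen in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods(namespace).Get(ctx, podName, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Println("pod never became Ready:", err)
		return
	}
	fmt.Println("pod is Ready")
}
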
	I1001 20:34:10.783831   68418 kubeadm.go:597] duration metric: took 4m20.686657254s to restartPrimaryControlPlane
	W1001 20:34:10.783895   68418 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1001 20:34:10.783926   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1001 20:34:36.981542   68418 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.197594945s)
	I1001 20:34:36.981628   68418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:34:37.005650   68418 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 20:34:37.017406   68418 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:34:37.031711   68418 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:34:37.031737   68418 kubeadm.go:157] found existing configuration files:
	
	I1001 20:34:37.031801   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1001 20:34:37.054028   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:34:37.054096   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:34:37.068277   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1001 20:34:37.099472   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:34:37.099558   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:34:37.109813   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1001 20:34:37.119548   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:34:37.119620   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:34:37.129522   68418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1001 20:34:37.138911   68418 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:34:37.138971   68418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 20:34:37.149119   68418 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 20:34:37.193177   68418 kubeadm.go:310] W1001 20:34:37.161028    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 20:34:37.193935   68418 kubeadm.go:310] W1001 20:34:37.161888    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 20:34:37.305111   68418 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 20:34:45.582383   68418 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 20:34:45.582463   68418 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:34:45.582540   68418 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:34:45.582643   68418 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:34:45.582725   68418 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 20:34:45.582825   68418 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 20:34:45.584304   68418 out.go:235]   - Generating certificates and keys ...
	I1001 20:34:45.584409   68418 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:34:45.584488   68418 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:34:45.584584   68418 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 20:34:45.584666   68418 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 20:34:45.584757   68418 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 20:34:45.584833   68418 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 20:34:45.584926   68418 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 20:34:45.585014   68418 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 20:34:45.585109   68418 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 20:34:45.585227   68418 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 20:34:45.585291   68418 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 20:34:45.585364   68418 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:34:45.585438   68418 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:34:45.585527   68418 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 20:34:45.585609   68418 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:34:45.585710   68418 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:34:45.585792   68418 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:34:45.585901   68418 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:34:45.585990   68418 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:34:45.587360   68418 out.go:235]   - Booting up control plane ...
	I1001 20:34:45.587448   68418 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:34:45.587539   68418 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:34:45.587626   68418 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:34:45.587751   68418 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:34:45.587885   68418 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:34:45.587960   68418 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:34:45.588118   68418 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 20:34:45.588256   68418 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 20:34:45.588341   68418 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002411615s
	I1001 20:34:45.588453   68418 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1001 20:34:45.588531   68418 kubeadm.go:310] [api-check] The API server is healthy after 5.002438287s
	I1001 20:34:45.588653   68418 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 20:34:45.588821   68418 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 20:34:45.588925   68418 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 20:34:45.589184   68418 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-878552 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 20:34:45.589272   68418 kubeadm.go:310] [bootstrap-token] Using token: p1d60n.4sgx895mi22cjpsf
	I1001 20:34:45.590444   68418 out.go:235]   - Configuring RBAC rules ...
	I1001 20:34:45.590599   68418 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 20:34:45.590726   68418 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 20:34:45.590923   68418 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 20:34:45.591071   68418 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 20:34:45.591222   68418 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 20:34:45.591301   68418 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 20:34:45.591402   68418 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 20:34:45.591441   68418 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 20:34:45.591485   68418 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 20:34:45.591492   68418 kubeadm.go:310] 
	I1001 20:34:45.591540   68418 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 20:34:45.591548   68418 kubeadm.go:310] 
	I1001 20:34:45.591614   68418 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 20:34:45.591619   68418 kubeadm.go:310] 
	I1001 20:34:45.591644   68418 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 20:34:45.591694   68418 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 20:34:45.591750   68418 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 20:34:45.591756   68418 kubeadm.go:310] 
	I1001 20:34:45.591812   68418 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 20:34:45.591818   68418 kubeadm.go:310] 
	I1001 20:34:45.591857   68418 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 20:34:45.591865   68418 kubeadm.go:310] 
	I1001 20:34:45.591909   68418 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 20:34:45.591990   68418 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 20:34:45.592063   68418 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 20:34:45.592071   68418 kubeadm.go:310] 
	I1001 20:34:45.592195   68418 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 20:34:45.592313   68418 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 20:34:45.592322   68418 kubeadm.go:310] 
	I1001 20:34:45.592432   68418 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token p1d60n.4sgx895mi22cjpsf \
	I1001 20:34:45.592579   68418 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 \
	I1001 20:34:45.592611   68418 kubeadm.go:310] 	--control-plane 
	I1001 20:34:45.592620   68418 kubeadm.go:310] 
	I1001 20:34:45.592734   68418 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 20:34:45.592743   68418 kubeadm.go:310] 
	I1001 20:34:45.592858   68418 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token p1d60n.4sgx895mi22cjpsf \
	I1001 20:34:45.592982   68418 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 
	I1001 20:34:45.592997   68418 cni.go:84] Creating CNI manager for ""
	I1001 20:34:45.593009   68418 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 20:34:45.594419   68418 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 20:34:45.595548   68418 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 20:34:45.607351   68418 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1001 20:34:45.627315   68418 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 20:34:45.627399   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:45.627424   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-878552 minikube.k8s.io/updated_at=2024_10_01T20_34_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=default-k8s-diff-port-878552 minikube.k8s.io/primary=true
	I1001 20:34:45.843925   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:45.843977   68418 ops.go:34] apiserver oom_adj: -16
	I1001 20:34:46.344009   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:46.844786   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:47.344138   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:47.844582   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:48.344478   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:48.844802   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:49.344790   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:49.844113   68418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:34:49.980078   68418 kubeadm.go:1113] duration metric: took 4.352743528s to wait for elevateKubeSystemPrivileges
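
The repeated `kubectl get sa default` runs above are the bootstrap waiting for the default ServiceAccount while kube-system is granted cluster-admin (the minikube-rbac binding created at 20:34:45). A rough sketch of the equivalent steps via os/exec, using the kubectl path and kubeconfig shown in the log; the retry loop and its ordering are illustrative, not minikube's exact logic:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl" // path from the log
	kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"

	// Retry until the "default" ServiceAccount exists; the API server may still be settling.
	for i := 0; i < 30; i++ {
		if err := exec.Command(kubectl, kubeconfig, "get", "sa", "default").Run(); err == nil {
			break
		}
		time.Sleep(500 * time.Millisecond)
	}

	// Grant cluster-admin to kube-system's default ServiceAccount, as in the log above.
	out, err := exec.Command(kubectl, kubeconfig,
		"create", "clusterrolebinding", "minikube-rbac",
		"--clusterrole=cluster-admin",
		"--serviceaccount=kube-system:default").CombinedOutput()
	if err != nil {
		fmt.Println("binding failed:", err, string(out))
		return
	}
	fmt.Println("cluster-admin granted to kube-system:default")
}
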
	I1001 20:34:49.980127   68418 kubeadm.go:394] duration metric: took 4m59.934297539s to StartCluster
	I1001 20:34:49.980151   68418 settings.go:142] acquiring lock: {Name:mkeb3fe64ed992373f048aef24eb0d675bcab60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:34:49.980237   68418 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:34:49.982156   68418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/kubeconfig: {Name:mk7ef19d546928001abf2478a3abe1d17765a591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:34:49.982450   68418 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.4 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 20:34:49.982531   68418 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 20:34:49.982651   68418 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-878552"
	I1001 20:34:49.982674   68418 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-878552"
	I1001 20:34:49.982673   68418 config.go:182] Loaded profile config "default-k8s-diff-port-878552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W1001 20:34:49.982682   68418 addons.go:243] addon storage-provisioner should already be in state true
	I1001 20:34:49.982722   68418 host.go:66] Checking if "default-k8s-diff-port-878552" exists ...
	I1001 20:34:49.982727   68418 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-878552"
	I1001 20:34:49.982743   68418 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-878552"
	I1001 20:34:49.982817   68418 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-878552"
	I1001 20:34:49.982861   68418 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-878552"
	W1001 20:34:49.982871   68418 addons.go:243] addon metrics-server should already be in state true
	I1001 20:34:49.982899   68418 host.go:66] Checking if "default-k8s-diff-port-878552" exists ...
	I1001 20:34:49.983158   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:34:49.983157   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:34:49.983202   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:34:49.983222   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:34:49.983301   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:34:49.983360   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:34:49.983825   68418 out.go:177] * Verifying Kubernetes components...
	I1001 20:34:49.985618   68418 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:34:50.000925   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33641
	I1001 20:34:50.001031   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40311
	I1001 20:34:50.001469   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:34:50.001518   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:34:50.002031   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:34:50.002046   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:34:50.002084   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:34:50.002096   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:34:50.002510   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:34:50.002698   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:34:50.003148   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:34:50.003188   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:34:50.003432   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33083
	I1001 20:34:50.003813   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:34:50.003845   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:34:50.003858   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:34:50.004438   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:34:50.004462   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:34:50.004823   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:34:50.005017   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetState
	I1001 20:34:50.009397   68418 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-878552"
	W1001 20:34:50.009420   68418 addons.go:243] addon default-storageclass should already be in state true
	I1001 20:34:50.009449   68418 host.go:66] Checking if "default-k8s-diff-port-878552" exists ...
	I1001 20:34:50.009886   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:34:50.009937   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:34:50.025234   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42543
	I1001 20:34:50.025892   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:34:50.026556   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:34:50.026583   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:34:50.027217   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:34:50.027484   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetState
	I1001 20:34:50.029351   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42743
	I1001 20:34:50.029576   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:34:50.029996   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:34:50.030498   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:34:50.030520   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:34:50.030634   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45211
	I1001 20:34:50.030843   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:34:50.031078   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetState
	I1001 20:34:50.031171   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:34:50.031283   68418 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 20:34:50.031683   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:34:50.031706   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:34:50.032061   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:34:50.032524   68418 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:34:50.032542   68418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 20:34:50.032560   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:34:50.032650   68418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:34:50.032683   68418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:34:50.033489   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:34:50.034928   68418 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1001 20:34:50.036629   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:34:50.036714   68418 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1001 20:34:50.036728   68418 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1001 20:34:50.036757   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:34:50.037000   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:34:50.037020   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:34:50.037303   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:34:50.037502   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:34:50.037697   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:34:50.037858   68418 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa Username:docker}
	I1001 20:34:50.040023   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:34:50.040406   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:34:50.040428   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:34:50.040637   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:34:50.040843   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:34:50.041031   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:34:50.041156   68418 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa Username:docker}
	I1001 20:34:50.050069   68418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34785
	I1001 20:34:50.050601   68418 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:34:50.051079   68418 main.go:141] libmachine: Using API Version  1
	I1001 20:34:50.051098   68418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:34:50.051460   68418 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:34:50.051601   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetState
	I1001 20:34:50.054072   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .DriverName
	I1001 20:34:50.054308   68418 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 20:34:50.054324   68418 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 20:34:50.054344   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHHostname
	I1001 20:34:50.057697   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:34:50.058329   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:13:05", ip: ""} in network mk-default-k8s-diff-port-878552: {Iface:virbr2 ExpiryTime:2024-10-01 21:29:34 +0000 UTC Type:0 Mac:52:54:00:72:13:05 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:default-k8s-diff-port-878552 Clientid:01:52:54:00:72:13:05}
	I1001 20:34:50.058386   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | domain default-k8s-diff-port-878552 has defined IP address 192.168.50.4 and MAC address 52:54:00:72:13:05 in network mk-default-k8s-diff-port-878552
	I1001 20:34:50.058519   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHPort
	I1001 20:34:50.058781   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHKeyPath
	I1001 20:34:50.059047   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .GetSSHUsername
	I1001 20:34:50.059192   68418 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/default-k8s-diff-port-878552/id_rsa Username:docker}
	I1001 20:34:50.228332   68418 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:34:50.245991   68418 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-878552" to be "Ready" ...
	I1001 20:34:50.255784   68418 node_ready.go:49] node "default-k8s-diff-port-878552" has status "Ready":"True"
	I1001 20:34:50.255822   68418 node_ready.go:38] duration metric: took 9.789404ms for node "default-k8s-diff-port-878552" to be "Ready" ...
	I1001 20:34:50.255836   68418 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:34:50.262258   68418 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8xth8" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:50.409170   68418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 20:34:50.412846   68418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:34:50.423375   68418 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1001 20:34:50.423404   68418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1001 20:34:50.476160   68418 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1001 20:34:50.476192   68418 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1001 20:34:50.510810   68418 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 20:34:50.510840   68418 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1001 20:34:50.570025   68418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 20:34:50.783367   68418 main.go:141] libmachine: Making call to close driver server
	I1001 20:34:50.783390   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Close
	I1001 20:34:50.783748   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | Closing plugin on server side
	I1001 20:34:50.783761   68418 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:34:50.783773   68418 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:34:50.783786   68418 main.go:141] libmachine: Making call to close driver server
	I1001 20:34:50.783794   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Close
	I1001 20:34:50.783980   68418 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:34:50.783993   68418 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:34:50.783999   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | Closing plugin on server side
	I1001 20:34:50.795782   68418 main.go:141] libmachine: Making call to close driver server
	I1001 20:34:50.795802   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Close
	I1001 20:34:50.796093   68418 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:34:50.796114   68418 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:34:51.424974   68418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.012087585s)
	I1001 20:34:51.425090   68418 main.go:141] libmachine: Making call to close driver server
	I1001 20:34:51.425107   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Close
	I1001 20:34:51.425376   68418 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:34:51.425413   68418 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:34:51.425426   68418 main.go:141] libmachine: Making call to close driver server
	I1001 20:34:51.425440   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Close
	I1001 20:34:51.425671   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | Closing plugin on server side
	I1001 20:34:51.425723   68418 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:34:51.425743   68418 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:34:51.713898   68418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.143834875s)
	I1001 20:34:51.713954   68418 main.go:141] libmachine: Making call to close driver server
	I1001 20:34:51.713969   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Close
	I1001 20:34:51.714336   68418 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:34:51.714375   68418 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:34:51.714380   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) DBG | Closing plugin on server side
	I1001 20:34:51.714385   68418 main.go:141] libmachine: Making call to close driver server
	I1001 20:34:51.714487   68418 main.go:141] libmachine: (default-k8s-diff-port-878552) Calling .Close
	I1001 20:34:51.714762   68418 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:34:51.714779   68418 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:34:51.714798   68418 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-878552"
	I1001 20:34:51.716414   68418 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1001 20:34:51.717866   68418 addons.go:510] duration metric: took 1.735348103s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
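
The addon step above only confirms the manifests were applied; whether metrics-server actually becomes available is what the later waits (and this report's failures) hinge on. A hedged client-go sketch for checking the Deployment directly; the Deployment name "metrics-server" and the kube-system namespace are assumptions based on the pod names in the log, and the kubeconfig path is a placeholder:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Read the Deployment's rollout status instead of polling individual pods.
	dep, err := client.AppsV1().Deployments("kube-system").Get(context.Background(), "metrics-server", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("metrics-server: %d/%d replicas available\n",
		dep.Status.AvailableReplicas, dep.Status.Replicas)
}
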
	I1001 20:34:52.268955   68418 pod_ready.go:103] pod "coredns-7c65d6cfc9-8xth8" in "kube-system" namespace has status "Ready":"False"
	I1001 20:34:54.769610   68418 pod_ready.go:93] pod "coredns-7c65d6cfc9-8xth8" in "kube-system" namespace has status "Ready":"True"
	I1001 20:34:54.769633   68418 pod_ready.go:82] duration metric: took 4.507339793s for pod "coredns-7c65d6cfc9-8xth8" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:54.769642   68418 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-p7wbg" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:56.775610   68418 pod_ready.go:103] pod "coredns-7c65d6cfc9-p7wbg" in "kube-system" namespace has status "Ready":"False"
	I1001 20:34:57.777422   68418 pod_ready.go:93] pod "coredns-7c65d6cfc9-p7wbg" in "kube-system" namespace has status "Ready":"True"
	I1001 20:34:57.777445   68418 pod_ready.go:82] duration metric: took 3.007796462s for pod "coredns-7c65d6cfc9-p7wbg" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.777455   68418 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.783103   68418 pod_ready.go:93] pod "etcd-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:34:57.783124   68418 pod_ready.go:82] duration metric: took 5.664052ms for pod "etcd-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.783135   68418 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.788028   68418 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:34:57.788052   68418 pod_ready.go:82] duration metric: took 4.910566ms for pod "kube-apiserver-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.788064   68418 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.792321   68418 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:34:57.792348   68418 pod_ready.go:82] duration metric: took 4.274793ms for pod "kube-controller-manager-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.792379   68418 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-272ln" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.797759   68418 pod_ready.go:93] pod "kube-proxy-272ln" in "kube-system" namespace has status "Ready":"True"
	I1001 20:34:57.797782   68418 pod_ready.go:82] duration metric: took 5.395909ms for pod "kube-proxy-272ln" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:57.797792   68418 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:58.173750   68418 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-878552" in "kube-system" namespace has status "Ready":"True"
	I1001 20:34:58.173783   68418 pod_ready.go:82] duration metric: took 375.98387ms for pod "kube-scheduler-default-k8s-diff-port-878552" in "kube-system" namespace to be "Ready" ...
	I1001 20:34:58.173793   68418 pod_ready.go:39] duration metric: took 7.917945016s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:34:58.173812   68418 api_server.go:52] waiting for apiserver process to appear ...
	I1001 20:34:58.173878   68418 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:34:58.188649   68418 api_server.go:72] duration metric: took 8.206165908s to wait for apiserver process to appear ...
	I1001 20:34:58.188676   68418 api_server.go:88] waiting for apiserver healthz status ...
	I1001 20:34:58.188697   68418 api_server.go:253] Checking apiserver healthz at https://192.168.50.4:8444/healthz ...
	I1001 20:34:58.193752   68418 api_server.go:279] https://192.168.50.4:8444/healthz returned 200:
	ok
	I1001 20:34:58.194629   68418 api_server.go:141] control plane version: v1.31.1
	I1001 20:34:58.194646   68418 api_server.go:131] duration metric: took 5.963942ms to wait for apiserver health ...
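
The healthz probe at 20:34:58 is a plain HTTPS GET against the apiserver endpoint recorded in the log. A minimal sketch; skipping TLS verification is an assumption made here for a throwaway test cluster, not something the report prescribes:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint from the log; a production client would trust the cluster CA instead of skipping verification.
	url := "https://192.168.50.4:8444/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, string(body)) // expect: 200 ok
}
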
	I1001 20:34:58.194653   68418 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 20:34:58.378081   68418 system_pods.go:59] 9 kube-system pods found
	I1001 20:34:58.378110   68418 system_pods.go:61] "coredns-7c65d6cfc9-8xth8" [4a6d614d-f16c-46fb-add5-610ac5895e1c] Running
	I1001 20:34:58.378115   68418 system_pods.go:61] "coredns-7c65d6cfc9-p7wbg" [13fab587-7dc4-41fc-a74c-47372725886d] Running
	I1001 20:34:58.378121   68418 system_pods.go:61] "etcd-default-k8s-diff-port-878552" [56a25509-d233-470d-888a-cf87475bf51b] Running
	I1001 20:34:58.378124   68418 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-878552" [d74bbc5a-6944-4e7b-a175-59b8ce58b359] Running
	I1001 20:34:58.378128   68418 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-878552" [5f2b8294-3146-4996-8a92-69ae08803d55] Running
	I1001 20:34:58.378131   68418 system_pods.go:61] "kube-proxy-272ln" [9f2e367f-34c7-4117-bd8e-62b5aa58c7b5] Running
	I1001 20:34:58.378134   68418 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-878552" [91e886e5-8452-4fe2-8be8-7705eeed5073] Running
	I1001 20:34:58.378140   68418 system_pods.go:61] "metrics-server-6867b74b74-75m4s" [c8eb4eb3-ea7f-4a88-8c1f-e3331f44ae53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:34:58.378143   68418 system_pods.go:61] "storage-provisioner" [bfc9ed28-f04b-4e57-b8c0-f41849e1fc25] Running
	I1001 20:34:58.378151   68418 system_pods.go:74] duration metric: took 183.491966ms to wait for pod list to return data ...
	I1001 20:34:58.378157   68418 default_sa.go:34] waiting for default service account to be created ...
	I1001 20:34:58.574257   68418 default_sa.go:45] found service account: "default"
	I1001 20:34:58.574282   68418 default_sa.go:55] duration metric: took 196.119399ms for default service account to be created ...
	I1001 20:34:58.574290   68418 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 20:34:58.776341   68418 system_pods.go:86] 9 kube-system pods found
	I1001 20:34:58.776395   68418 system_pods.go:89] "coredns-7c65d6cfc9-8xth8" [4a6d614d-f16c-46fb-add5-610ac5895e1c] Running
	I1001 20:34:58.776406   68418 system_pods.go:89] "coredns-7c65d6cfc9-p7wbg" [13fab587-7dc4-41fc-a74c-47372725886d] Running
	I1001 20:34:58.776420   68418 system_pods.go:89] "etcd-default-k8s-diff-port-878552" [56a25509-d233-470d-888a-cf87475bf51b] Running
	I1001 20:34:58.776428   68418 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-878552" [d74bbc5a-6944-4e7b-a175-59b8ce58b359] Running
	I1001 20:34:58.776438   68418 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-878552" [5f2b8294-3146-4996-8a92-69ae08803d55] Running
	I1001 20:34:58.776443   68418 system_pods.go:89] "kube-proxy-272ln" [9f2e367f-34c7-4117-bd8e-62b5aa58c7b5] Running
	I1001 20:34:58.776449   68418 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-878552" [91e886e5-8452-4fe2-8be8-7705eeed5073] Running
	I1001 20:34:58.776456   68418 system_pods.go:89] "metrics-server-6867b74b74-75m4s" [c8eb4eb3-ea7f-4a88-8c1f-e3331f44ae53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 20:34:58.776463   68418 system_pods.go:89] "storage-provisioner" [bfc9ed28-f04b-4e57-b8c0-f41849e1fc25] Running
	I1001 20:34:58.776471   68418 system_pods.go:126] duration metric: took 202.174994ms to wait for k8s-apps to be running ...
	I1001 20:34:58.776481   68418 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 20:34:58.776526   68418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:34:58.791729   68418 system_svc.go:56] duration metric: took 15.241394ms WaitForService to wait for kubelet
	I1001 20:34:58.791758   68418 kubeadm.go:582] duration metric: took 8.809278003s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:34:58.791774   68418 node_conditions.go:102] verifying NodePressure condition ...
	I1001 20:34:58.976076   68418 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 20:34:58.976102   68418 node_conditions.go:123] node cpu capacity is 2
	I1001 20:34:58.976115   68418 node_conditions.go:105] duration metric: took 184.336121ms to run NodePressure ...
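
The NodePressure verification above reads node capacity and conditions. A short client-go sketch of the same read; the kubeconfig path is a placeholder, while the node name and the reported capacities come from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	node, err := client.CoreV1().Nodes().Get(context.Background(), "default-k8s-diff-port-878552", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Matches the "node cpu capacity" and "node storage ephemeral capacity" lines above.
	fmt.Println("cpu:", node.Status.Capacity.Cpu().String())
	fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String())

	// Pressure conditions (MemoryPressure, DiskPressure, PIDPressure) should all be False on a healthy node.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%s=%s\n", c.Type, c.Status)
	}
}
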
	I1001 20:34:58.976127   68418 start.go:241] waiting for startup goroutines ...
	I1001 20:34:58.976136   68418 start.go:246] waiting for cluster config update ...
	I1001 20:34:58.976149   68418 start.go:255] writing updated cluster config ...
	I1001 20:34:58.976450   68418 ssh_runner.go:195] Run: rm -f paused
	I1001 20:34:59.026367   68418 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 20:34:59.029055   68418 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-878552" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 01 20:40:17 old-k8s-version-359369 crio[632]: time="2024-10-01 20:40:17.968507937Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815217968476509,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43edadf3-48c1-436f-b4c3-138ec6794333 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:40:17 old-k8s-version-359369 crio[632]: time="2024-10-01 20:40:17.968993888Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a60667c-ccff-4f4f-8905-1e99ad9de721 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:40:17 old-k8s-version-359369 crio[632]: time="2024-10-01 20:40:17.969074268Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a60667c-ccff-4f4f-8905-1e99ad9de721 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:40:17 old-k8s-version-359369 crio[632]: time="2024-10-01 20:40:17.969108165Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6a60667c-ccff-4f4f-8905-1e99ad9de721 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:40:18 old-k8s-version-359369 crio[632]: time="2024-10-01 20:40:18.003080877Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7430663a-793b-4543-b925-83951f0d8496 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:40:18 old-k8s-version-359369 crio[632]: time="2024-10-01 20:40:18.003215582Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7430663a-793b-4543-b925-83951f0d8496 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:40:18 old-k8s-version-359369 crio[632]: time="2024-10-01 20:40:18.004411884Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6fa4083f-52d4-435a-b587-433c01a50aa7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:40:18 old-k8s-version-359369 crio[632]: time="2024-10-01 20:40:18.004799807Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815218004773263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6fa4083f-52d4-435a-b587-433c01a50aa7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:40:18 old-k8s-version-359369 crio[632]: time="2024-10-01 20:40:18.005354158Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5af0fb33-78aa-41a6-a0ec-1d7be4d51f65 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:40:18 old-k8s-version-359369 crio[632]: time="2024-10-01 20:40:18.005428734Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5af0fb33-78aa-41a6-a0ec-1d7be4d51f65 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:40:18 old-k8s-version-359369 crio[632]: time="2024-10-01 20:40:18.005463489Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5af0fb33-78aa-41a6-a0ec-1d7be4d51f65 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:40:18 old-k8s-version-359369 crio[632]: time="2024-10-01 20:40:18.039698121Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3e3bdb42-e679-405d-91e3-4c30d281d1b9 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:40:18 old-k8s-version-359369 crio[632]: time="2024-10-01 20:40:18.039798383Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e3bdb42-e679-405d-91e3-4c30d281d1b9 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:40:18 old-k8s-version-359369 crio[632]: time="2024-10-01 20:40:18.041234491Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=164ee921-67c1-4f99-bfcd-b0b7758d6d72 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:40:18 old-k8s-version-359369 crio[632]: time="2024-10-01 20:40:18.041618260Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815218041596114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=164ee921-67c1-4f99-bfcd-b0b7758d6d72 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:40:18 old-k8s-version-359369 crio[632]: time="2024-10-01 20:40:18.042421072Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a179afc0-8bb5-4430-a580-2ca4bc706857 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:40:18 old-k8s-version-359369 crio[632]: time="2024-10-01 20:40:18.042492739Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a179afc0-8bb5-4430-a580-2ca4bc706857 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:40:18 old-k8s-version-359369 crio[632]: time="2024-10-01 20:40:18.042528424Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a179afc0-8bb5-4430-a580-2ca4bc706857 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:40:18 old-k8s-version-359369 crio[632]: time="2024-10-01 20:40:18.080947966Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=44d76150-e4e4-49ac-8a20-f63ed0388072 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:40:18 old-k8s-version-359369 crio[632]: time="2024-10-01 20:40:18.081037820Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=44d76150-e4e4-49ac-8a20-f63ed0388072 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:40:18 old-k8s-version-359369 crio[632]: time="2024-10-01 20:40:18.083093518Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c916ed9c-0117-4239-98e0-b4c8597b2f44 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:40:18 old-k8s-version-359369 crio[632]: time="2024-10-01 20:40:18.083560706Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815218083535495,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c916ed9c-0117-4239-98e0-b4c8597b2f44 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:40:18 old-k8s-version-359369 crio[632]: time="2024-10-01 20:40:18.084238282Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=743e3049-1aa9-4187-90ea-80cda87c03a4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:40:18 old-k8s-version-359369 crio[632]: time="2024-10-01 20:40:18.084307963Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=743e3049-1aa9-4187-90ea-80cda87c03a4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:40:18 old-k8s-version-359369 crio[632]: time="2024-10-01 20:40:18.084345834Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=743e3049-1aa9-4187-90ea-80cda87c03a4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 1 20:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.061451] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043514] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.028959] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.047745] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.355137] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.538724] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.065709] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077031] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.174087] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.145035] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.248393] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +6.785134] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.069182] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.078495] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[ +11.012728] kauditd_printk_skb: 46 callbacks suppressed
	[Oct 1 20:25] systemd-fstab-generator[5075]: Ignoring "noauto" option for root device
	[Oct 1 20:27] systemd-fstab-generator[5356]: Ignoring "noauto" option for root device
	[  +0.061063] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:40:18 up 19 min,  0 users,  load average: 0.03, 0.05, 0.00
	Linux old-k8s-version-359369 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 01 20:40:13 old-k8s-version-359369 kubelet[6765]: net/http.(*Transport).dialConn(0xc0007d5180, 0x4f7fe00, 0xc000120018, 0x0, 0xc000af9ce0, 0x5, 0xc000c20300, 0x24, 0x0, 0xc000af39e0, ...)
	Oct 01 20:40:13 old-k8s-version-359369 kubelet[6765]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Oct 01 20:40:13 old-k8s-version-359369 kubelet[6765]: net/http.(*Transport).dialConnFor(0xc0007d5180, 0xc000b87e40)
	Oct 01 20:40:13 old-k8s-version-359369 kubelet[6765]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Oct 01 20:40:13 old-k8s-version-359369 kubelet[6765]: created by net/http.(*Transport).queueForDial
	Oct 01 20:40:13 old-k8s-version-359369 kubelet[6765]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Oct 01 20:40:13 old-k8s-version-359369 kubelet[6765]: goroutine 153 [select]:
	Oct 01 20:40:13 old-k8s-version-359369 kubelet[6765]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000c0e5f0, 0xc000c42e01, 0xc000aada00, 0xc000bd5650, 0xc000acbe40, 0xc000acbe00)
	Oct 01 20:40:13 old-k8s-version-359369 kubelet[6765]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Oct 01 20:40:13 old-k8s-version-359369 kubelet[6765]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000c42e40, 0x0, 0x0)
	Oct 01 20:40:13 old-k8s-version-359369 kubelet[6765]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Oct 01 20:40:13 old-k8s-version-359369 kubelet[6765]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0001cd880)
	Oct 01 20:40:13 old-k8s-version-359369 kubelet[6765]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Oct 01 20:40:13 old-k8s-version-359369 kubelet[6765]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Oct 01 20:40:13 old-k8s-version-359369 kubelet[6765]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Oct 01 20:40:13 old-k8s-version-359369 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 01 20:40:13 old-k8s-version-359369 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 01 20:40:13 old-k8s-version-359369 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 130.
	Oct 01 20:40:13 old-k8s-version-359369 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 01 20:40:13 old-k8s-version-359369 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 01 20:40:13 old-k8s-version-359369 kubelet[6774]: I1001 20:40:13.953234    6774 server.go:416] Version: v1.20.0
	Oct 01 20:40:13 old-k8s-version-359369 kubelet[6774]: I1001 20:40:13.954049    6774 server.go:837] Client rotation is on, will bootstrap in background
	Oct 01 20:40:13 old-k8s-version-359369 kubelet[6774]: I1001 20:40:13.956610    6774 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 01 20:40:13 old-k8s-version-359369 kubelet[6774]: I1001 20:40:13.958049    6774 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Oct 01 20:40:13 old-k8s-version-359369 kubelet[6774]: W1001 20:40:13.958230    6774 manager.go:159] Cannot detect current cgroup on cgroup v2
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-359369 -n old-k8s-version-359369
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-359369 -n old-k8s-version-359369: exit status 2 (225.988926ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-359369" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (93.94s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (138.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-878552 -n default-k8s-diff-port-878552
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-01 20:46:18.807177027 +0000 UTC m=+6729.755980414
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-878552 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-878552 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.935µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-878552 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-878552 -n default-k8s-diff-port-878552
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-878552 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-878552 logs -n 25: (1.316098834s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-983557                         | enable-default-cni-983557 | jenkins | v1.34.0 | 01 Oct 24 20:45 UTC | 01 Oct 24 20:45 UTC |
	|         | sudo systemctl status kubelet                        |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-983557                         | enable-default-cni-983557 | jenkins | v1.34.0 | 01 Oct 24 20:45 UTC | 01 Oct 24 20:45 UTC |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-983557                         | enable-default-cni-983557 | jenkins | v1.34.0 | 01 Oct 24 20:45 UTC | 01 Oct 24 20:45 UTC |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-983557                         | enable-default-cni-983557 | jenkins | v1.34.0 | 01 Oct 24 20:45 UTC | 01 Oct 24 20:45 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-983557                         | enable-default-cni-983557 | jenkins | v1.34.0 | 01 Oct 24 20:45 UTC | 01 Oct 24 20:45 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-983557                         | enable-default-cni-983557 | jenkins | v1.34.0 | 01 Oct 24 20:45 UTC |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-983557                         | enable-default-cni-983557 | jenkins | v1.34.0 | 01 Oct 24 20:45 UTC | 01 Oct 24 20:45 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-983557                         | enable-default-cni-983557 | jenkins | v1.34.0 | 01 Oct 24 20:45 UTC | 01 Oct 24 20:45 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-983557                         | enable-default-cni-983557 | jenkins | v1.34.0 | 01 Oct 24 20:45 UTC |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-983557                         | enable-default-cni-983557 | jenkins | v1.34.0 | 01 Oct 24 20:45 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-983557                         | enable-default-cni-983557 | jenkins | v1.34.0 | 01 Oct 24 20:45 UTC | 01 Oct 24 20:45 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-983557 sudo cat                | enable-default-cni-983557 | jenkins | v1.34.0 | 01 Oct 24 20:45 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-983557 sudo cat                | enable-default-cni-983557 | jenkins | v1.34.0 | 01 Oct 24 20:45 UTC | 01 Oct 24 20:45 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-983557                         | enable-default-cni-983557 | jenkins | v1.34.0 | 01 Oct 24 20:45 UTC | 01 Oct 24 20:45 UTC |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-983557                         | enable-default-cni-983557 | jenkins | v1.34.0 | 01 Oct 24 20:45 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-983557                         | enable-default-cni-983557 | jenkins | v1.34.0 | 01 Oct 24 20:45 UTC | 01 Oct 24 20:45 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-983557 sudo cat                | enable-default-cni-983557 | jenkins | v1.34.0 | 01 Oct 24 20:45 UTC | 01 Oct 24 20:45 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-983557                         | enable-default-cni-983557 | jenkins | v1.34.0 | 01 Oct 24 20:45 UTC | 01 Oct 24 20:45 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-983557                         | enable-default-cni-983557 | jenkins | v1.34.0 | 01 Oct 24 20:45 UTC | 01 Oct 24 20:45 UTC |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-983557                         | enable-default-cni-983557 | jenkins | v1.34.0 | 01 Oct 24 20:45 UTC | 01 Oct 24 20:45 UTC |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-983557                         | enable-default-cni-983557 | jenkins | v1.34.0 | 01 Oct 24 20:45 UTC | 01 Oct 24 20:45 UTC |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-983557                         | enable-default-cni-983557 | jenkins | v1.34.0 | 01 Oct 24 20:45 UTC | 01 Oct 24 20:45 UTC |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-983557                         | enable-default-cni-983557 | jenkins | v1.34.0 | 01 Oct 24 20:45 UTC | 01 Oct 24 20:45 UTC |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-983557                         | enable-default-cni-983557 | jenkins | v1.34.0 | 01 Oct 24 20:45 UTC | 01 Oct 24 20:45 UTC |
	| ssh     | -p flannel-983557 pgrep -a                           | flannel-983557            | jenkins | v1.34.0 | 01 Oct 24 20:46 UTC | 01 Oct 24 20:46 UTC |
	|         | kubelet                                              |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 20:45:05
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 20:45:05.065403   81817 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:45:05.065719   81817 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:45:05.065734   81817 out.go:358] Setting ErrFile to fd 2...
	I1001 20:45:05.065741   81817 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:45:05.066047   81817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 20:45:05.066840   81817 out.go:352] Setting JSON to false
	I1001 20:45:05.068091   81817 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8847,"bootTime":1727806658,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 20:45:05.068187   81817 start.go:139] virtualization: kvm guest
	I1001 20:45:05.070224   81817 out.go:177] * [bridge-983557] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 20:45:05.071595   81817 notify.go:220] Checking for updates...
	I1001 20:45:05.071610   81817 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 20:45:05.073166   81817 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 20:45:05.074488   81817 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:45:05.075943   81817 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 20:45:05.077202   81817 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 20:45:05.078509   81817 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 20:45:05.080166   81817 config.go:182] Loaded profile config "default-k8s-diff-port-878552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:45:05.080256   81817 config.go:182] Loaded profile config "enable-default-cni-983557": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:45:05.080583   81817 config.go:182] Loaded profile config "flannel-983557": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:45:05.080716   81817 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 20:45:05.123965   81817 out.go:177] * Using the kvm2 driver based on user configuration
	I1001 20:45:05.125304   81817 start.go:297] selected driver: kvm2
	I1001 20:45:05.125322   81817 start.go:901] validating driver "kvm2" against <nil>
	I1001 20:45:05.125335   81817 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 20:45:05.126044   81817 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:45:05.126121   81817 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-11198/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 20:45:05.142038   81817 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 20:45:05.142089   81817 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 20:45:05.142355   81817 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:45:05.142385   81817 cni.go:84] Creating CNI manager for "bridge"
	I1001 20:45:05.142392   81817 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 20:45:05.142467   81817 start.go:340] cluster config:
	{Name:bridge-983557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-983557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:45:05.142613   81817 iso.go:125] acquiring lock: {Name:mk06aa0d5182c9fbfa5e40313025cbec2b4400a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:45:05.144079   81817 out.go:177] * Starting "bridge-983557" primary control-plane node in "bridge-983557" cluster
	I1001 20:45:02.315893   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:02.316509   80210 main.go:141] libmachine: (flannel-983557) DBG | unable to find current IP address of domain flannel-983557 in network mk-flannel-983557
	I1001 20:45:02.316551   80210 main.go:141] libmachine: (flannel-983557) DBG | I1001 20:45:02.316450   80232 retry.go:31] will retry after 3.077884004s: waiting for machine to come up
	I1001 20:45:05.397824   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:05.398271   80210 main.go:141] libmachine: (flannel-983557) DBG | unable to find current IP address of domain flannel-983557 in network mk-flannel-983557
	I1001 20:45:05.398287   80210 main.go:141] libmachine: (flannel-983557) DBG | I1001 20:45:05.398241   80232 retry.go:31] will retry after 4.442203394s: waiting for machine to come up
	I1001 20:45:04.348183   78160 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgnln" in "kube-system" namespace has status "Ready":"False"
	I1001 20:45:06.843520   78160 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgnln" in "kube-system" namespace has status "Ready":"False"
	I1001 20:45:05.145137   81817 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 20:45:05.145175   81817 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 20:45:05.145191   81817 cache.go:56] Caching tarball of preloaded images
	I1001 20:45:05.145274   81817 preload.go:172] Found /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 20:45:05.145285   81817 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 20:45:05.145384   81817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557/config.json ...
	I1001 20:45:05.145403   81817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557/config.json: {Name:mk27433ea02f9c76177a9c5b41b6dbf45189ea0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:45:05.145563   81817 start.go:360] acquireMachinesLock for bridge-983557: {Name:mk0da9f9e72785b38d21a4ec663aa8aa42710456 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 20:45:09.845559   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:09.846239   80210 main.go:141] libmachine: (flannel-983557) Found IP for machine: 192.168.39.251
	I1001 20:45:09.846277   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has current primary IP address 192.168.39.251 and MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:09.846292   80210 main.go:141] libmachine: (flannel-983557) Reserving static IP address...
	I1001 20:45:09.846815   80210 main.go:141] libmachine: (flannel-983557) DBG | unable to find host DHCP lease matching {name: "flannel-983557", mac: "52:54:00:33:dd:e9", ip: "192.168.39.251"} in network mk-flannel-983557
	I1001 20:45:09.941239   80210 main.go:141] libmachine: (flannel-983557) Reserved static IP address: 192.168.39.251
	I1001 20:45:09.941267   80210 main.go:141] libmachine: (flannel-983557) Waiting for SSH to be available...
	I1001 20:45:09.941276   80210 main.go:141] libmachine: (flannel-983557) DBG | Getting to WaitForSSH function...
	I1001 20:45:09.943941   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:09.944447   80210 main.go:141] libmachine: (flannel-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:dd:e9", ip: ""} in network mk-flannel-983557: {Iface:virbr1 ExpiryTime:2024-10-01 21:45:00 +0000 UTC Type:0 Mac:52:54:00:33:dd:e9 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:minikube Clientid:01:52:54:00:33:dd:e9}
	I1001 20:45:09.944481   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined IP address 192.168.39.251 and MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:09.944660   80210 main.go:141] libmachine: (flannel-983557) DBG | Using SSH client type: external
	I1001 20:45:09.944684   80210 main.go:141] libmachine: (flannel-983557) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/flannel-983557/id_rsa (-rw-------)
	I1001 20:45:09.944725   80210 main.go:141] libmachine: (flannel-983557) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/flannel-983557/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 20:45:09.944741   80210 main.go:141] libmachine: (flannel-983557) DBG | About to run SSH command:
	I1001 20:45:09.944757   80210 main.go:141] libmachine: (flannel-983557) DBG | exit 0
	I1001 20:45:10.068959   80210 main.go:141] libmachine: (flannel-983557) DBG | SSH cmd err, output: <nil>: 
	I1001 20:45:10.069336   80210 main.go:141] libmachine: (flannel-983557) KVM machine creation complete!
	I1001 20:45:10.069667   80210 main.go:141] libmachine: (flannel-983557) Calling .GetConfigRaw
	I1001 20:45:10.070780   80210 main.go:141] libmachine: (flannel-983557) Calling .DriverName
	I1001 20:45:10.071065   80210 main.go:141] libmachine: (flannel-983557) Calling .DriverName
	I1001 20:45:10.071300   80210 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 20:45:10.071321   80210 main.go:141] libmachine: (flannel-983557) Calling .GetState
	I1001 20:45:10.073148   80210 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 20:45:10.073167   80210 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 20:45:10.073199   80210 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 20:45:10.073211   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHHostname
	I1001 20:45:10.076064   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:10.076530   80210 main.go:141] libmachine: (flannel-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:dd:e9", ip: ""} in network mk-flannel-983557: {Iface:virbr1 ExpiryTime:2024-10-01 21:45:00 +0000 UTC Type:0 Mac:52:54:00:33:dd:e9 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:flannel-983557 Clientid:01:52:54:00:33:dd:e9}
	I1001 20:45:10.076574   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined IP address 192.168.39.251 and MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:10.076772   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHPort
	I1001 20:45:10.076963   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHKeyPath
	I1001 20:45:10.077138   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHKeyPath
	I1001 20:45:10.077273   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHUsername
	I1001 20:45:10.077431   80210 main.go:141] libmachine: Using SSH client type: native
	I1001 20:45:10.077626   80210 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 20:45:10.077641   80210 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 20:45:10.179913   80210 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 20:45:10.179937   80210 main.go:141] libmachine: Detecting the provisioner...
	I1001 20:45:10.179946   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHHostname
	I1001 20:45:10.183508   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:10.183926   80210 main.go:141] libmachine: (flannel-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:dd:e9", ip: ""} in network mk-flannel-983557: {Iface:virbr1 ExpiryTime:2024-10-01 21:45:00 +0000 UTC Type:0 Mac:52:54:00:33:dd:e9 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:flannel-983557 Clientid:01:52:54:00:33:dd:e9}
	I1001 20:45:10.183975   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined IP address 192.168.39.251 and MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:10.184213   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHPort
	I1001 20:45:10.184496   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHKeyPath
	I1001 20:45:10.184669   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHKeyPath
	I1001 20:45:10.184838   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHUsername
	I1001 20:45:10.185070   80210 main.go:141] libmachine: Using SSH client type: native
	I1001 20:45:10.185237   80210 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 20:45:10.185247   80210 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 20:45:10.289059   80210 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 20:45:10.289191   80210 main.go:141] libmachine: found compatible host: buildroot
	I1001 20:45:10.289207   80210 main.go:141] libmachine: Provisioning with buildroot...
	I1001 20:45:10.289217   80210 main.go:141] libmachine: (flannel-983557) Calling .GetMachineName
	I1001 20:45:10.289549   80210 buildroot.go:166] provisioning hostname "flannel-983557"
	I1001 20:45:10.289574   80210 main.go:141] libmachine: (flannel-983557) Calling .GetMachineName
	I1001 20:45:10.289781   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHHostname
	I1001 20:45:10.292704   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:10.293229   80210 main.go:141] libmachine: (flannel-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:dd:e9", ip: ""} in network mk-flannel-983557: {Iface:virbr1 ExpiryTime:2024-10-01 21:45:00 +0000 UTC Type:0 Mac:52:54:00:33:dd:e9 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:flannel-983557 Clientid:01:52:54:00:33:dd:e9}
	I1001 20:45:10.293256   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined IP address 192.168.39.251 and MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:10.293431   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHPort
	I1001 20:45:10.293641   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHKeyPath
	I1001 20:45:10.293804   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHKeyPath
	I1001 20:45:10.293970   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHUsername
	I1001 20:45:10.294144   80210 main.go:141] libmachine: Using SSH client type: native
	I1001 20:45:10.294334   80210 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 20:45:10.294352   80210 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-983557 && echo "flannel-983557" | sudo tee /etc/hostname
	I1001 20:45:10.411292   80210 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-983557
	
	I1001 20:45:10.411328   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHHostname
	I1001 20:45:10.415014   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:10.415423   80210 main.go:141] libmachine: (flannel-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:dd:e9", ip: ""} in network mk-flannel-983557: {Iface:virbr1 ExpiryTime:2024-10-01 21:45:00 +0000 UTC Type:0 Mac:52:54:00:33:dd:e9 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:flannel-983557 Clientid:01:52:54:00:33:dd:e9}
	I1001 20:45:10.415459   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined IP address 192.168.39.251 and MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:10.415736   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHPort
	I1001 20:45:10.415965   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHKeyPath
	I1001 20:45:10.416150   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHKeyPath
	I1001 20:45:10.416308   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHUsername
	I1001 20:45:10.416461   80210 main.go:141] libmachine: Using SSH client type: native
	I1001 20:45:10.416682   80210 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 20:45:10.416706   80210 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-983557' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-983557/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-983557' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 20:45:11.465690   81817 start.go:364] duration metric: took 6.320089329s to acquireMachinesLock for "bridge-983557"
	I1001 20:45:11.465797   81817 start.go:93] Provisioning new machine with config: &{Name:bridge-983557 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-983557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 20:45:11.465906   81817 start.go:125] createHost starting for "" (driver="kvm2")
	I1001 20:45:08.844047   78160 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgnln" in "kube-system" namespace has status "Ready":"False"
	I1001 20:45:10.845297   78160 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgnln" in "kube-system" namespace has status "Ready":"False"
	I1001 20:45:10.531196   80210 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 20:45:10.531227   80210 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 20:45:10.531266   80210 buildroot.go:174] setting up certificates
	I1001 20:45:10.531276   80210 provision.go:84] configureAuth start
	I1001 20:45:10.531285   80210 main.go:141] libmachine: (flannel-983557) Calling .GetMachineName
	I1001 20:45:10.531584   80210 main.go:141] libmachine: (flannel-983557) Calling .GetIP
	I1001 20:45:10.534720   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:10.535168   80210 main.go:141] libmachine: (flannel-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:dd:e9", ip: ""} in network mk-flannel-983557: {Iface:virbr1 ExpiryTime:2024-10-01 21:45:00 +0000 UTC Type:0 Mac:52:54:00:33:dd:e9 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:flannel-983557 Clientid:01:52:54:00:33:dd:e9}
	I1001 20:45:10.535205   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined IP address 192.168.39.251 and MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:10.535405   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHHostname
	I1001 20:45:10.538193   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:10.538586   80210 main.go:141] libmachine: (flannel-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:dd:e9", ip: ""} in network mk-flannel-983557: {Iface:virbr1 ExpiryTime:2024-10-01 21:45:00 +0000 UTC Type:0 Mac:52:54:00:33:dd:e9 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:flannel-983557 Clientid:01:52:54:00:33:dd:e9}
	I1001 20:45:10.538616   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined IP address 192.168.39.251 and MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:10.538754   80210 provision.go:143] copyHostCerts
	I1001 20:45:10.538809   80210 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 20:45:10.538819   80210 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 20:45:10.538903   80210 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 20:45:10.539040   80210 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 20:45:10.539050   80210 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 20:45:10.539085   80210 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 20:45:10.539158   80210 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 20:45:10.539166   80210 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 20:45:10.539189   80210 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 20:45:10.539251   80210 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.flannel-983557 san=[127.0.0.1 192.168.39.251 flannel-983557 localhost minikube]
	I1001 20:45:10.829876   80210 provision.go:177] copyRemoteCerts
	I1001 20:45:10.829932   80210 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 20:45:10.829954   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHHostname
	I1001 20:45:10.832887   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:10.833248   80210 main.go:141] libmachine: (flannel-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:dd:e9", ip: ""} in network mk-flannel-983557: {Iface:virbr1 ExpiryTime:2024-10-01 21:45:00 +0000 UTC Type:0 Mac:52:54:00:33:dd:e9 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:flannel-983557 Clientid:01:52:54:00:33:dd:e9}
	I1001 20:45:10.833273   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined IP address 192.168.39.251 and MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:10.833423   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHPort
	I1001 20:45:10.833619   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHKeyPath
	I1001 20:45:10.833771   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHUsername
	I1001 20:45:10.833942   80210 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/flannel-983557/id_rsa Username:docker}
	I1001 20:45:10.919027   80210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 20:45:10.944685   80210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1001 20:45:10.969053   80210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 20:45:10.993486   80210 provision.go:87] duration metric: took 462.197382ms to configureAuth
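Note: the server certificate provisioned above (org jenkins.flannel-983557, SANs 127.0.0.1, 192.168.39.251, flannel-983557, localhost, minikube) is generated in Go by libmachine's provisioner; an equivalent openssl sketch, reusing the file names from the log, would be:

    # sketch only: minikube does this in Go, not by shelling out to openssl
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.flannel-983557" -out server.csr
    openssl x509 -req -in server.csr -CA certs/ca.pem -CAkey certs/ca-key.pem \
        -CAcreateserial -days 365 -out server.pem \
        -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.251,DNS:flannel-983557,DNS:localhost,DNS:minikube')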
	I1001 20:45:10.993513   80210 buildroot.go:189] setting minikube options for container-runtime
	I1001 20:45:10.993674   80210 config.go:182] Loaded profile config "flannel-983557": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:45:10.993747   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHHostname
	I1001 20:45:10.996891   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:10.997266   80210 main.go:141] libmachine: (flannel-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:dd:e9", ip: ""} in network mk-flannel-983557: {Iface:virbr1 ExpiryTime:2024-10-01 21:45:00 +0000 UTC Type:0 Mac:52:54:00:33:dd:e9 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:flannel-983557 Clientid:01:52:54:00:33:dd:e9}
	I1001 20:45:10.997295   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined IP address 192.168.39.251 and MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:10.997493   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHPort
	I1001 20:45:10.997693   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHKeyPath
	I1001 20:45:10.997855   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHKeyPath
	I1001 20:45:10.997972   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHUsername
	I1001 20:45:10.998122   80210 main.go:141] libmachine: Using SSH client type: native
	I1001 20:45:10.998312   80210 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 20:45:10.998335   80210 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 20:45:11.220076   80210 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
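Note: the write-and-restart above only matters if the guest's crio.service actually sources /etc/sysconfig/crio.minikube; that wiring comes from the Buildroot guest image, not from this log. A quick check on the VM (sketch, assuming the unit references the file):

    sudo cat /etc/sysconfig/crio.minikube
    systemctl cat crio | grep -i environmentfile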
	I1001 20:45:11.220104   80210 main.go:141] libmachine: Checking connection to Docker...
	I1001 20:45:11.220113   80210 main.go:141] libmachine: (flannel-983557) Calling .GetURL
	I1001 20:45:11.221627   80210 main.go:141] libmachine: (flannel-983557) DBG | Using libvirt version 6000000
	I1001 20:45:11.224397   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:11.224761   80210 main.go:141] libmachine: (flannel-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:dd:e9", ip: ""} in network mk-flannel-983557: {Iface:virbr1 ExpiryTime:2024-10-01 21:45:00 +0000 UTC Type:0 Mac:52:54:00:33:dd:e9 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:flannel-983557 Clientid:01:52:54:00:33:dd:e9}
	I1001 20:45:11.224789   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined IP address 192.168.39.251 and MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:11.224958   80210 main.go:141] libmachine: Docker is up and running!
	I1001 20:45:11.224971   80210 main.go:141] libmachine: Reticulating splines...
	I1001 20:45:11.224978   80210 client.go:171] duration metric: took 25.628865737s to LocalClient.Create
	I1001 20:45:11.224998   80210 start.go:167] duration metric: took 25.628927432s to libmachine.API.Create "flannel-983557"
	I1001 20:45:11.225008   80210 start.go:293] postStartSetup for "flannel-983557" (driver="kvm2")
	I1001 20:45:11.225020   80210 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 20:45:11.225043   80210 main.go:141] libmachine: (flannel-983557) Calling .DriverName
	I1001 20:45:11.225260   80210 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 20:45:11.225295   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHHostname
	I1001 20:45:11.227730   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:11.228051   80210 main.go:141] libmachine: (flannel-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:dd:e9", ip: ""} in network mk-flannel-983557: {Iface:virbr1 ExpiryTime:2024-10-01 21:45:00 +0000 UTC Type:0 Mac:52:54:00:33:dd:e9 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:flannel-983557 Clientid:01:52:54:00:33:dd:e9}
	I1001 20:45:11.228081   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined IP address 192.168.39.251 and MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:11.228212   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHPort
	I1001 20:45:11.228425   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHKeyPath
	I1001 20:45:11.228565   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHUsername
	I1001 20:45:11.228705   80210 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/flannel-983557/id_rsa Username:docker}
	I1001 20:45:11.311098   80210 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 20:45:11.315890   80210 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 20:45:11.315921   80210 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 20:45:11.315989   80210 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 20:45:11.316079   80210 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 20:45:11.316180   80210 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 20:45:11.326154   80210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 20:45:11.354358   80210 start.go:296] duration metric: took 129.336926ms for postStartSetup
	I1001 20:45:11.354432   80210 main.go:141] libmachine: (flannel-983557) Calling .GetConfigRaw
	I1001 20:45:11.355167   80210 main.go:141] libmachine: (flannel-983557) Calling .GetIP
	I1001 20:45:11.358557   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:11.359010   80210 main.go:141] libmachine: (flannel-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:dd:e9", ip: ""} in network mk-flannel-983557: {Iface:virbr1 ExpiryTime:2024-10-01 21:45:00 +0000 UTC Type:0 Mac:52:54:00:33:dd:e9 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:flannel-983557 Clientid:01:52:54:00:33:dd:e9}
	I1001 20:45:11.359043   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined IP address 192.168.39.251 and MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:11.359396   80210 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/flannel-983557/config.json ...
	I1001 20:45:11.359673   80210 start.go:128] duration metric: took 25.782351984s to createHost
	I1001 20:45:11.359706   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHHostname
	I1001 20:45:11.362870   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:11.363350   80210 main.go:141] libmachine: (flannel-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:dd:e9", ip: ""} in network mk-flannel-983557: {Iface:virbr1 ExpiryTime:2024-10-01 21:45:00 +0000 UTC Type:0 Mac:52:54:00:33:dd:e9 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:flannel-983557 Clientid:01:52:54:00:33:dd:e9}
	I1001 20:45:11.363380   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined IP address 192.168.39.251 and MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:11.363580   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHPort
	I1001 20:45:11.363787   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHKeyPath
	I1001 20:45:11.363992   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHKeyPath
	I1001 20:45:11.364118   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHUsername
	I1001 20:45:11.364259   80210 main.go:141] libmachine: Using SSH client type: native
	I1001 20:45:11.364476   80210 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 20:45:11.364494   80210 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 20:45:11.465522   80210 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727815511.443196803
	
	I1001 20:45:11.465545   80210 fix.go:216] guest clock: 1727815511.443196803
	I1001 20:45:11.465562   80210 fix.go:229] Guest: 2024-10-01 20:45:11.443196803 +0000 UTC Remote: 2024-10-01 20:45:11.359690858 +0000 UTC m=+25.897464382 (delta=83.505945ms)
	I1001 20:45:11.465594   80210 fix.go:200] guest clock delta is within tolerance: 83.505945ms
	I1001 20:45:11.465604   80210 start.go:83] releasing machines lock for "flannel-983557", held for 25.888377995s
	I1001 20:45:11.465635   80210 main.go:141] libmachine: (flannel-983557) Calling .DriverName
	I1001 20:45:11.465947   80210 main.go:141] libmachine: (flannel-983557) Calling .GetIP
	I1001 20:45:11.468782   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:11.469176   80210 main.go:141] libmachine: (flannel-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:dd:e9", ip: ""} in network mk-flannel-983557: {Iface:virbr1 ExpiryTime:2024-10-01 21:45:00 +0000 UTC Type:0 Mac:52:54:00:33:dd:e9 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:flannel-983557 Clientid:01:52:54:00:33:dd:e9}
	I1001 20:45:11.469210   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined IP address 192.168.39.251 and MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:11.469439   80210 main.go:141] libmachine: (flannel-983557) Calling .DriverName
	I1001 20:45:11.470003   80210 main.go:141] libmachine: (flannel-983557) Calling .DriverName
	I1001 20:45:11.470184   80210 main.go:141] libmachine: (flannel-983557) Calling .DriverName
	I1001 20:45:11.470281   80210 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 20:45:11.470321   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHHostname
	I1001 20:45:11.470445   80210 ssh_runner.go:195] Run: cat /version.json
	I1001 20:45:11.470469   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHHostname
	I1001 20:45:11.473374   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:11.473725   80210 main.go:141] libmachine: (flannel-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:dd:e9", ip: ""} in network mk-flannel-983557: {Iface:virbr1 ExpiryTime:2024-10-01 21:45:00 +0000 UTC Type:0 Mac:52:54:00:33:dd:e9 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:flannel-983557 Clientid:01:52:54:00:33:dd:e9}
	I1001 20:45:11.473749   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined IP address 192.168.39.251 and MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:11.473769   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:11.473845   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHPort
	I1001 20:45:11.474019   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHKeyPath
	I1001 20:45:11.474142   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHUsername
	I1001 20:45:11.474257   80210 main.go:141] libmachine: (flannel-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:dd:e9", ip: ""} in network mk-flannel-983557: {Iface:virbr1 ExpiryTime:2024-10-01 21:45:00 +0000 UTC Type:0 Mac:52:54:00:33:dd:e9 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:flannel-983557 Clientid:01:52:54:00:33:dd:e9}
	I1001 20:45:11.474260   80210 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/flannel-983557/id_rsa Username:docker}
	I1001 20:45:11.474282   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined IP address 192.168.39.251 and MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:11.474477   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHPort
	I1001 20:45:11.474618   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHKeyPath
	I1001 20:45:11.474749   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHUsername
	I1001 20:45:11.474868   80210 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/flannel-983557/id_rsa Username:docker}
	I1001 20:45:11.594266   80210 ssh_runner.go:195] Run: systemctl --version
	I1001 20:45:11.602070   80210 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 20:45:11.773270   80210 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 20:45:11.780047   80210 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 20:45:11.780134   80210 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 20:45:11.801046   80210 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
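Note: the find one-liner above is dense; unrolled, it renames every bridge/podman CNI config that is not already suffixed .mk_disabled so CRI-O will not load it. A readable bash equivalent (sketch):

    for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
      [ -f "$f" ] && [[ "$f" != *.mk_disabled ]] && sudo mv "$f" "$f.mk_disabled"
    done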
	I1001 20:45:11.801069   80210 start.go:495] detecting cgroup driver to use...
	I1001 20:45:11.801127   80210 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 20:45:11.818798   80210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 20:45:11.834527   80210 docker.go:217] disabling cri-docker service (if available) ...
	I1001 20:45:11.834620   80210 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 20:45:11.850249   80210 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 20:45:11.865781   80210 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 20:45:11.991532   80210 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 20:45:12.158708   80210 docker.go:233] disabling docker service ...
	I1001 20:45:12.158780   80210 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 20:45:12.178283   80210 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 20:45:12.192183   80210 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 20:45:12.332005   80210 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 20:45:12.485953   80210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 20:45:12.500646   80210 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 20:45:12.523345   80210 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 20:45:12.523421   80210 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:45:12.535554   80210 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 20:45:12.535620   80210 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:45:12.545762   80210 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:45:12.556505   80210 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:45:12.567896   80210 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 20:45:12.580222   80210 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:45:12.592062   80210 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:45:12.612969   80210 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
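Note: taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. A spot check on the guest (expected values reconstructed from the commands above, not read from the file):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",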
	I1001 20:45:12.625524   80210 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 20:45:12.635167   80210 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 20:45:12.635228   80210 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 20:45:12.650279   80210 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
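Note: because br_netfilter was not loaded (the sysctl probe above failed), the run loads the module and enables ip_forward at runtime only. Making both survive a reboot is usually done with modules-load.d/sysctl.d; a generic sketch, not something this test does:

    printf 'br_netfilter\n' | sudo tee /etc/modules-load.d/br_netfilter.conf
    printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' \
        | sudo tee /etc/sysctl.d/99-kubernetes.conf
    sudo sysctl --system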
	I1001 20:45:12.661324   80210 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:45:12.790150   80210 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 20:45:12.902252   80210 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 20:45:12.902334   80210 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 20:45:12.908612   80210 start.go:563] Will wait 60s for crictl version
	I1001 20:45:12.908677   80210 ssh_runner.go:195] Run: which crictl
	I1001 20:45:12.912688   80210 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 20:45:12.965324   80210 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 20:45:12.965419   80210 ssh_runner.go:195] Run: crio --version
	I1001 20:45:12.997022   80210 ssh_runner.go:195] Run: crio --version
	I1001 20:45:13.029907   80210 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 20:45:11.469106   81817 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 20:45:11.469328   81817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:45:11.469381   81817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:45:11.485941   81817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36151
	I1001 20:45:11.486487   81817 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:45:11.487219   81817 main.go:141] libmachine: Using API Version  1
	I1001 20:45:11.487238   81817 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:45:11.487700   81817 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:45:11.487888   81817 main.go:141] libmachine: (bridge-983557) Calling .GetMachineName
	I1001 20:45:11.488102   81817 main.go:141] libmachine: (bridge-983557) Calling .DriverName
	I1001 20:45:11.488266   81817 start.go:159] libmachine.API.Create for "bridge-983557" (driver="kvm2")
	I1001 20:45:11.488301   81817 client.go:168] LocalClient.Create starting
	I1001 20:45:11.488342   81817 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem
	I1001 20:45:11.488404   81817 main.go:141] libmachine: Decoding PEM data...
	I1001 20:45:11.488428   81817 main.go:141] libmachine: Parsing certificate...
	I1001 20:45:11.488565   81817 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem
	I1001 20:45:11.488614   81817 main.go:141] libmachine: Decoding PEM data...
	I1001 20:45:11.488641   81817 main.go:141] libmachine: Parsing certificate...
	I1001 20:45:11.488672   81817 main.go:141] libmachine: Running pre-create checks...
	I1001 20:45:11.488688   81817 main.go:141] libmachine: (bridge-983557) Calling .PreCreateCheck
	I1001 20:45:11.489149   81817 main.go:141] libmachine: (bridge-983557) Calling .GetConfigRaw
	I1001 20:45:11.489618   81817 main.go:141] libmachine: Creating machine...
	I1001 20:45:11.489632   81817 main.go:141] libmachine: (bridge-983557) Calling .Create
	I1001 20:45:11.489809   81817 main.go:141] libmachine: (bridge-983557) Creating KVM machine...
	I1001 20:45:11.491476   81817 main.go:141] libmachine: (bridge-983557) DBG | found existing default KVM network
	I1001 20:45:11.493002   81817 main.go:141] libmachine: (bridge-983557) DBG | I1001 20:45:11.492820   81882 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:0f:ab:6a} reservation:<nil>}
	I1001 20:45:11.493819   81817 main.go:141] libmachine: (bridge-983557) DBG | I1001 20:45:11.493750   81882 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:59:db:fe} reservation:<nil>}
	I1001 20:45:11.494789   81817 main.go:141] libmachine: (bridge-983557) DBG | I1001 20:45:11.494681   81882 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:14:1a:0f} reservation:<nil>}
	I1001 20:45:11.496008   81817 main.go:141] libmachine: (bridge-983557) DBG | I1001 20:45:11.495886   81882 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003811c0}
	I1001 20:45:11.496032   81817 main.go:141] libmachine: (bridge-983557) DBG | created network xml: 
	I1001 20:45:11.496042   81817 main.go:141] libmachine: (bridge-983557) DBG | <network>
	I1001 20:45:11.496051   81817 main.go:141] libmachine: (bridge-983557) DBG |   <name>mk-bridge-983557</name>
	I1001 20:45:11.496059   81817 main.go:141] libmachine: (bridge-983557) DBG |   <dns enable='no'/>
	I1001 20:45:11.496068   81817 main.go:141] libmachine: (bridge-983557) DBG |   
	I1001 20:45:11.496080   81817 main.go:141] libmachine: (bridge-983557) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1001 20:45:11.496092   81817 main.go:141] libmachine: (bridge-983557) DBG |     <dhcp>
	I1001 20:45:11.496105   81817 main.go:141] libmachine: (bridge-983557) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1001 20:45:11.496115   81817 main.go:141] libmachine: (bridge-983557) DBG |     </dhcp>
	I1001 20:45:11.496124   81817 main.go:141] libmachine: (bridge-983557) DBG |   </ip>
	I1001 20:45:11.496133   81817 main.go:141] libmachine: (bridge-983557) DBG |   
	I1001 20:45:11.496143   81817 main.go:141] libmachine: (bridge-983557) DBG | </network>
	I1001 20:45:11.496152   81817 main.go:141] libmachine: (bridge-983557) DBG | 
	I1001 20:45:11.502272   81817 main.go:141] libmachine: (bridge-983557) DBG | trying to create private KVM network mk-bridge-983557 192.168.72.0/24...
	I1001 20:45:11.585144   81817 main.go:141] libmachine: (bridge-983557) DBG | private KVM network mk-bridge-983557 192.168.72.0/24 created
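Note: the private network above is defined and started through the libvirt API by the kvm2 driver; the rough virsh equivalent, assuming the generated XML were saved to a file, is:

    virsh net-define mk-bridge-983557.xml     # register the 192.168.72.0/24 network
    virsh net-start mk-bridge-983557
    virsh net-dhcp-leases mk-bridge-983557    # later shows the lease handed to the VM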
	I1001 20:45:11.585184   81817 main.go:141] libmachine: (bridge-983557) Setting up store path in /home/jenkins/minikube-integration/19736-11198/.minikube/machines/bridge-983557 ...
	I1001 20:45:11.585199   81817 main.go:141] libmachine: (bridge-983557) Building disk image from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 20:45:11.585210   81817 main.go:141] libmachine: (bridge-983557) DBG | I1001 20:45:11.585115   81882 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 20:45:11.585274   81817 main.go:141] libmachine: (bridge-983557) Downloading /home/jenkins/minikube-integration/19736-11198/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 20:45:11.879306   81817 main.go:141] libmachine: (bridge-983557) DBG | I1001 20:45:11.879176   81882 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/bridge-983557/id_rsa...
	I1001 20:45:12.053001   81817 main.go:141] libmachine: (bridge-983557) DBG | I1001 20:45:12.052846   81882 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/bridge-983557/bridge-983557.rawdisk...
	I1001 20:45:12.053041   81817 main.go:141] libmachine: (bridge-983557) DBG | Writing magic tar header
	I1001 20:45:12.053055   81817 main.go:141] libmachine: (bridge-983557) DBG | Writing SSH key tar header
	I1001 20:45:12.053065   81817 main.go:141] libmachine: (bridge-983557) DBG | I1001 20:45:12.053002   81882 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/bridge-983557 ...
	I1001 20:45:12.053177   81817 main.go:141] libmachine: (bridge-983557) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/bridge-983557
	I1001 20:45:12.053200   81817 main.go:141] libmachine: (bridge-983557) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube/machines
	I1001 20:45:12.053213   81817 main.go:141] libmachine: (bridge-983557) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines/bridge-983557 (perms=drwx------)
	I1001 20:45:12.053228   81817 main.go:141] libmachine: (bridge-983557) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube/machines (perms=drwxr-xr-x)
	I1001 20:45:12.053238   81817 main.go:141] libmachine: (bridge-983557) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 20:45:12.053252   81817 main.go:141] libmachine: (bridge-983557) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198/.minikube (perms=drwxr-xr-x)
	I1001 20:45:12.053297   81817 main.go:141] libmachine: (bridge-983557) Setting executable bit set on /home/jenkins/minikube-integration/19736-11198 (perms=drwxrwxr-x)
	I1001 20:45:12.053314   81817 main.go:141] libmachine: (bridge-983557) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19736-11198
	I1001 20:45:12.053323   81817 main.go:141] libmachine: (bridge-983557) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 20:45:12.053336   81817 main.go:141] libmachine: (bridge-983557) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 20:45:12.053346   81817 main.go:141] libmachine: (bridge-983557) Creating domain...
	I1001 20:45:12.053359   81817 main.go:141] libmachine: (bridge-983557) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 20:45:12.053370   81817 main.go:141] libmachine: (bridge-983557) DBG | Checking permissions on dir: /home/jenkins
	I1001 20:45:12.053383   81817 main.go:141] libmachine: (bridge-983557) DBG | Checking permissions on dir: /home
	I1001 20:45:12.053393   81817 main.go:141] libmachine: (bridge-983557) DBG | Skipping /home - not owner
	I1001 20:45:12.054543   81817 main.go:141] libmachine: (bridge-983557) define libvirt domain using xml: 
	I1001 20:45:12.054567   81817 main.go:141] libmachine: (bridge-983557) <domain type='kvm'>
	I1001 20:45:12.054578   81817 main.go:141] libmachine: (bridge-983557)   <name>bridge-983557</name>
	I1001 20:45:12.054584   81817 main.go:141] libmachine: (bridge-983557)   <memory unit='MiB'>3072</memory>
	I1001 20:45:12.054592   81817 main.go:141] libmachine: (bridge-983557)   <vcpu>2</vcpu>
	I1001 20:45:12.054598   81817 main.go:141] libmachine: (bridge-983557)   <features>
	I1001 20:45:12.054612   81817 main.go:141] libmachine: (bridge-983557)     <acpi/>
	I1001 20:45:12.054625   81817 main.go:141] libmachine: (bridge-983557)     <apic/>
	I1001 20:45:12.054635   81817 main.go:141] libmachine: (bridge-983557)     <pae/>
	I1001 20:45:12.054641   81817 main.go:141] libmachine: (bridge-983557)     
	I1001 20:45:12.054649   81817 main.go:141] libmachine: (bridge-983557)   </features>
	I1001 20:45:12.054661   81817 main.go:141] libmachine: (bridge-983557)   <cpu mode='host-passthrough'>
	I1001 20:45:12.054671   81817 main.go:141] libmachine: (bridge-983557)   
	I1001 20:45:12.054680   81817 main.go:141] libmachine: (bridge-983557)   </cpu>
	I1001 20:45:12.054698   81817 main.go:141] libmachine: (bridge-983557)   <os>
	I1001 20:45:12.054708   81817 main.go:141] libmachine: (bridge-983557)     <type>hvm</type>
	I1001 20:45:12.054717   81817 main.go:141] libmachine: (bridge-983557)     <boot dev='cdrom'/>
	I1001 20:45:12.054727   81817 main.go:141] libmachine: (bridge-983557)     <boot dev='hd'/>
	I1001 20:45:12.054738   81817 main.go:141] libmachine: (bridge-983557)     <bootmenu enable='no'/>
	I1001 20:45:12.054760   81817 main.go:141] libmachine: (bridge-983557)   </os>
	I1001 20:45:12.054770   81817 main.go:141] libmachine: (bridge-983557)   <devices>
	I1001 20:45:12.054783   81817 main.go:141] libmachine: (bridge-983557)     <disk type='file' device='cdrom'>
	I1001 20:45:12.054807   81817 main.go:141] libmachine: (bridge-983557)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/bridge-983557/boot2docker.iso'/>
	I1001 20:45:12.054821   81817 main.go:141] libmachine: (bridge-983557)       <target dev='hdc' bus='scsi'/>
	I1001 20:45:12.054827   81817 main.go:141] libmachine: (bridge-983557)       <readonly/>
	I1001 20:45:12.054836   81817 main.go:141] libmachine: (bridge-983557)     </disk>
	I1001 20:45:12.054844   81817 main.go:141] libmachine: (bridge-983557)     <disk type='file' device='disk'>
	I1001 20:45:12.054858   81817 main.go:141] libmachine: (bridge-983557)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 20:45:12.054873   81817 main.go:141] libmachine: (bridge-983557)       <source file='/home/jenkins/minikube-integration/19736-11198/.minikube/machines/bridge-983557/bridge-983557.rawdisk'/>
	I1001 20:45:12.054885   81817 main.go:141] libmachine: (bridge-983557)       <target dev='hda' bus='virtio'/>
	I1001 20:45:12.054902   81817 main.go:141] libmachine: (bridge-983557)     </disk>
	I1001 20:45:12.054914   81817 main.go:141] libmachine: (bridge-983557)     <interface type='network'>
	I1001 20:45:12.054937   81817 main.go:141] libmachine: (bridge-983557)       <source network='mk-bridge-983557'/>
	I1001 20:45:12.054949   81817 main.go:141] libmachine: (bridge-983557)       <model type='virtio'/>
	I1001 20:45:12.054956   81817 main.go:141] libmachine: (bridge-983557)     </interface>
	I1001 20:45:12.054968   81817 main.go:141] libmachine: (bridge-983557)     <interface type='network'>
	I1001 20:45:12.054980   81817 main.go:141] libmachine: (bridge-983557)       <source network='default'/>
	I1001 20:45:12.054990   81817 main.go:141] libmachine: (bridge-983557)       <model type='virtio'/>
	I1001 20:45:12.054999   81817 main.go:141] libmachine: (bridge-983557)     </interface>
	I1001 20:45:12.055010   81817 main.go:141] libmachine: (bridge-983557)     <serial type='pty'>
	I1001 20:45:12.055024   81817 main.go:141] libmachine: (bridge-983557)       <target port='0'/>
	I1001 20:45:12.055035   81817 main.go:141] libmachine: (bridge-983557)     </serial>
	I1001 20:45:12.055043   81817 main.go:141] libmachine: (bridge-983557)     <console type='pty'>
	I1001 20:45:12.055055   81817 main.go:141] libmachine: (bridge-983557)       <target type='serial' port='0'/>
	I1001 20:45:12.055063   81817 main.go:141] libmachine: (bridge-983557)     </console>
	I1001 20:45:12.055074   81817 main.go:141] libmachine: (bridge-983557)     <rng model='virtio'>
	I1001 20:45:12.055085   81817 main.go:141] libmachine: (bridge-983557)       <backend model='random'>/dev/random</backend>
	I1001 20:45:12.055096   81817 main.go:141] libmachine: (bridge-983557)     </rng>
	I1001 20:45:12.055106   81817 main.go:141] libmachine: (bridge-983557)     
	I1001 20:45:12.055115   81817 main.go:141] libmachine: (bridge-983557)     
	I1001 20:45:12.055156   81817 main.go:141] libmachine: (bridge-983557)   </devices>
	I1001 20:45:12.055171   81817 main.go:141] libmachine: (bridge-983557) </domain>
	I1001 20:45:12.055178   81817 main.go:141] libmachine: (bridge-983557) 
	I1001 20:45:12.059748   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:72:a1:5a in network default
	I1001 20:45:12.060529   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:12.060550   81817 main.go:141] libmachine: (bridge-983557) Ensuring networks are active...
	I1001 20:45:12.061410   81817 main.go:141] libmachine: (bridge-983557) Ensuring network default is active
	I1001 20:45:12.061861   81817 main.go:141] libmachine: (bridge-983557) Ensuring network mk-bridge-983557 is active
	I1001 20:45:12.062479   81817 main.go:141] libmachine: (bridge-983557) Getting domain xml...
	I1001 20:45:12.063499   81817 main.go:141] libmachine: (bridge-983557) Creating domain...
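Note: the define/create sequence above maps onto libvirt's defineXML and create calls; a virsh sketch, assuming the domain XML were written to a file:

    virsh define bridge-983557.xml
    virsh start bridge-983557
    virsh domifaddr bridge-983557 --source lease   # what the "Waiting to get IP" loop below polls for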
	I1001 20:45:13.501079   81817 main.go:141] libmachine: (bridge-983557) Waiting to get IP...
	I1001 20:45:13.502232   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:13.502814   81817 main.go:141] libmachine: (bridge-983557) DBG | unable to find current IP address of domain bridge-983557 in network mk-bridge-983557
	I1001 20:45:13.502874   81817 main.go:141] libmachine: (bridge-983557) DBG | I1001 20:45:13.502789   81882 retry.go:31] will retry after 238.741223ms: waiting for machine to come up
	I1001 20:45:13.743333   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:13.744210   81817 main.go:141] libmachine: (bridge-983557) DBG | unable to find current IP address of domain bridge-983557 in network mk-bridge-983557
	I1001 20:45:13.744242   81817 main.go:141] libmachine: (bridge-983557) DBG | I1001 20:45:13.744185   81882 retry.go:31] will retry after 365.985544ms: waiting for machine to come up
	I1001 20:45:14.112141   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:14.112773   81817 main.go:141] libmachine: (bridge-983557) DBG | unable to find current IP address of domain bridge-983557 in network mk-bridge-983557
	I1001 20:45:14.112795   81817 main.go:141] libmachine: (bridge-983557) DBG | I1001 20:45:14.112685   81882 retry.go:31] will retry after 305.152704ms: waiting for machine to come up
	I1001 20:45:14.419263   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:14.419896   81817 main.go:141] libmachine: (bridge-983557) DBG | unable to find current IP address of domain bridge-983557 in network mk-bridge-983557
	I1001 20:45:14.419929   81817 main.go:141] libmachine: (bridge-983557) DBG | I1001 20:45:14.419758   81882 retry.go:31] will retry after 596.118125ms: waiting for machine to come up
	I1001 20:45:15.017609   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:15.018251   81817 main.go:141] libmachine: (bridge-983557) DBG | unable to find current IP address of domain bridge-983557 in network mk-bridge-983557
	I1001 20:45:15.018287   81817 main.go:141] libmachine: (bridge-983557) DBG | I1001 20:45:15.018183   81882 retry.go:31] will retry after 464.831431ms: waiting for machine to come up
	I1001 20:45:13.031328   80210 main.go:141] libmachine: (flannel-983557) Calling .GetIP
	I1001 20:45:13.035081   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:13.035556   80210 main.go:141] libmachine: (flannel-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:dd:e9", ip: ""} in network mk-flannel-983557: {Iface:virbr1 ExpiryTime:2024-10-01 21:45:00 +0000 UTC Type:0 Mac:52:54:00:33:dd:e9 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:flannel-983557 Clientid:01:52:54:00:33:dd:e9}
	I1001 20:45:13.035597   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined IP address 192.168.39.251 and MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:13.035825   80210 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 20:45:13.040391   80210 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
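Note: the one-liner above is minikube's idempotent /etc/hosts update: strip any existing host.minikube.internal line, re-append it pointing at the network gateway, and copy the temp file back in a single sudo step. Generalized (sketch; helper name is hypothetical):

    add_host_entry() {  # usage: add_host_entry 192.168.39.1 host.minikube.internal
      local ip="$1" name="$2"
      { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
      sudo cp "/tmp/hosts.$$" /etc/hosts
    }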
	I1001 20:45:13.057479   80210 kubeadm.go:883] updating cluster {Name:flannel-983557 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
1.1 ClusterName:flannel-983557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 20:45:13.057602   80210 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 20:45:13.057661   80210 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:45:13.098015   80210 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1001 20:45:13.098084   80210 ssh_runner.go:195] Run: which lz4
	I1001 20:45:13.102445   80210 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 20:45:13.106546   80210 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 20:45:13.106589   80210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1001 20:45:14.562211   80210 crio.go:462] duration metric: took 1.459804101s to copy over tarball
	I1001 20:45:14.562324   80210 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 20:45:13.346570   78160 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgnln" in "kube-system" namespace has status "Ready":"False"
	I1001 20:45:15.846290   78160 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgnln" in "kube-system" namespace has status "Ready":"False"
	I1001 20:45:16.997632   80210 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.435276008s)
	I1001 20:45:16.997661   80210 crio.go:469] duration metric: took 2.435416452s to extract the tarball
	I1001 20:45:16.997670   80210 ssh_runner.go:146] rm: /preloaded.tar.lz4
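Note: the ~388 MB preload copied and unpacked above is a plain lz4-compressed tar of the CRI-O image store for v1.31.1, so it can be inspected on the host with standard tools (sketch):

    lz4 -dc preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 | tar -tf - | head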
	I1001 20:45:17.035885   80210 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:45:17.086437   80210 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 20:45:17.086464   80210 cache_images.go:84] Images are preloaded, skipping loading
	I1001 20:45:17.086475   80210 kubeadm.go:934] updating node { 192.168.39.251 8443 v1.31.1 crio true true} ...
	I1001 20:45:17.086597   80210 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-983557 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:flannel-983557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
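Note: the unit fragment above appears to be the kubelet drop-in copied further down to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 314-byte scp). The empty ExecStart= line is standard systemd drop-in syntax for clearing the base unit's command before setting the new one. To see the merged result on the guest (sketch):

    systemctl cat kubelet        # base unit plus the 10-kubeadm.conf drop-in
    sudo systemctl daemon-reload && sudo systemctl restart kubelet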
	I1001 20:45:17.086691   80210 ssh_runner.go:195] Run: crio config
	I1001 20:45:17.140715   80210 cni.go:84] Creating CNI manager for "flannel"
	I1001 20:45:17.140744   80210 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 20:45:17.140781   80210 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.251 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-983557 NodeName:flannel-983557 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 20:45:17.140964   80210 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.251
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-983557"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.251"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
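Note: this rendered kubeadm config is what gets written below to /var/tmp/minikube/kubeadm.yaml.new (2158 bytes) and is presumably what kubeadm init consumes later in the run. Sanity-checking such a file by hand would look like this (sketch; flags are standard kubeadm, paths taken from the log):

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml.new --dry-run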
	
	I1001 20:45:17.141035   80210 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 20:45:17.150852   80210 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 20:45:17.150928   80210 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 20:45:17.161527   80210 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I1001 20:45:17.178760   80210 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 20:45:17.199005   80210 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I1001 20:45:17.216978   80210 ssh_runner.go:195] Run: grep 192.168.39.251	control-plane.minikube.internal$ /etc/hosts
	I1001 20:45:17.221082   80210 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.251	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 20:45:17.233960   80210 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:45:17.377040   80210 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:45:17.398568   80210 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/flannel-983557 for IP: 192.168.39.251
	I1001 20:45:17.398594   80210 certs.go:194] generating shared ca certs ...
	I1001 20:45:17.398613   80210 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:45:17.398791   80210 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 20:45:17.398847   80210 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 20:45:17.398861   80210 certs.go:256] generating profile certs ...
	I1001 20:45:17.398932   80210 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/flannel-983557/client.key
	I1001 20:45:17.398955   80210 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/flannel-983557/client.crt with IP's: []
	I1001 20:45:17.904252   80210 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/flannel-983557/client.crt ...
	I1001 20:45:17.904283   80210 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/flannel-983557/client.crt: {Name:mk1be4ff874f0ded5ff83646539f031d78e47935 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:45:17.904480   80210 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/flannel-983557/client.key ...
	I1001 20:45:17.904494   80210 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/flannel-983557/client.key: {Name:mk0bd3168947654432bd9bc096b996f13f2bb9e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:45:17.904579   80210 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/flannel-983557/apiserver.key.5cd5a3a4
	I1001 20:45:17.904594   80210 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/flannel-983557/apiserver.crt.5cd5a3a4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.251]
	I1001 20:45:18.052284   80210 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/flannel-983557/apiserver.crt.5cd5a3a4 ...
	I1001 20:45:18.052316   80210 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/flannel-983557/apiserver.crt.5cd5a3a4: {Name:mk7885f43d9494619a65b66e5cee59c001a9491a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:45:18.052510   80210 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/flannel-983557/apiserver.key.5cd5a3a4 ...
	I1001 20:45:18.052527   80210 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/flannel-983557/apiserver.key.5cd5a3a4: {Name:mk0498b9edee1e0fb5accc447165cb9c9e008efe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:45:18.052623   80210 certs.go:381] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/flannel-983557/apiserver.crt.5cd5a3a4 -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/flannel-983557/apiserver.crt
	I1001 20:45:18.052717   80210 certs.go:385] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/flannel-983557/apiserver.key.5cd5a3a4 -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/flannel-983557/apiserver.key
	I1001 20:45:18.052776   80210 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/flannel-983557/proxy-client.key
	I1001 20:45:18.052794   80210 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/flannel-983557/proxy-client.crt with IP's: []
	I1001 20:45:18.454190   80210 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/flannel-983557/proxy-client.crt ...
	I1001 20:45:18.454222   80210 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/flannel-983557/proxy-client.crt: {Name:mkf5046522bcca182115e81c58c37f7ae21ad14c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:45:18.454385   80210 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/flannel-983557/proxy-client.key ...
	I1001 20:45:18.454398   80210 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/flannel-983557/proxy-client.key: {Name:mk5f1c7b5b46ac395ccaf3216fe76cc8bc0c1fbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
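Note: the apiserver certificate generated above was requested with SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.251; that can be confirmed directly against the written file (sketch):

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/flannel-983557/apiserver.crt \
        | grep -A1 'Subject Alternative Name'
    # expect: IP Address:10.96.0.1, IP Address:127.0.0.1, IP Address:10.0.0.1, IP Address:192.168.39.251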
	I1001 20:45:18.454561   80210 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem (1338 bytes)
	W1001 20:45:18.454603   80210 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430_empty.pem, impossibly tiny 0 bytes
	I1001 20:45:18.454617   80210 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 20:45:18.454650   80210 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 20:45:18.454676   80210 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 20:45:18.454699   80210 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 20:45:18.454757   80210 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem (1708 bytes)
	I1001 20:45:18.455321   80210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 20:45:18.482916   80210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 20:45:18.510312   80210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 20:45:18.538462   80210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 20:45:18.566397   80210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/flannel-983557/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1001 20:45:18.593007   80210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/flannel-983557/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 20:45:18.692523   80210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/flannel-983557/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 20:45:18.717454   80210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/flannel-983557/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 20:45:18.744601   80210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem --> /usr/share/ca-certificates/18430.pem (1338 bytes)
	I1001 20:45:18.771207   80210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /usr/share/ca-certificates/184302.pem (1708 bytes)
	I1001 20:45:18.797099   80210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 20:45:18.822096   80210 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 20:45:18.838939   80210 ssh_runner.go:195] Run: openssl version
	I1001 20:45:18.845059   80210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18430.pem && ln -fs /usr/share/ca-certificates/18430.pem /etc/ssl/certs/18430.pem"
	I1001 20:45:18.857907   80210 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18430.pem
	I1001 20:45:18.862659   80210 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 20:45:18.862725   80210 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18430.pem
	I1001 20:45:18.868716   80210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18430.pem /etc/ssl/certs/51391683.0"
	I1001 20:45:18.880298   80210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/184302.pem && ln -fs /usr/share/ca-certificates/184302.pem /etc/ssl/certs/184302.pem"
	I1001 20:45:18.894114   80210 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/184302.pem
	I1001 20:45:18.898957   80210 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 20:45:18.899031   80210 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/184302.pem
	I1001 20:45:18.904863   80210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/184302.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 20:45:18.927268   80210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 20:45:18.940038   80210 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:45:18.949185   80210 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:45:18.949256   80210 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:45:18.955003   80210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
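(The openssl/ln sequence above is the standard OpenSSL CA-directory layout: each trusted PEM is hashed with `openssl x509 -hash -noout` and exposed as /etc/ssl/certs/<hash>.0. A minimal Go sketch of that pattern follows; the helper is hypothetical, not minikube code, and the log runs these commands over SSH rather than locally.)

```go
// Hypothetical sketch: hash a CA bundle and symlink it into /etc/ssl/certs,
// mirroring the "openssl x509 -hash -noout" + "ln -fs" steps in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of pemPath and creates
// /etc/ssl/certs/<hash>.0 pointing at it (requires root to write there).
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs equivalent: drop any stale link first
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```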
	I1001 20:45:18.966167   80210 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 20:45:18.971271   80210 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 20:45:18.971333   80210 kubeadm.go:392] StartCluster: {Name:flannel-983557 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-983557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:45:18.971430   80210 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 20:45:18.971487   80210 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 20:45:19.016960   80210 cri.go:89] found id: ""
	I1001 20:45:19.017021   80210 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 20:45:19.027733   80210 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 20:45:19.037989   80210 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:45:19.048596   80210 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:45:19.048614   80210 kubeadm.go:157] found existing configuration files:
	
	I1001 20:45:19.048663   80210 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 20:45:19.060379   80210 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:45:19.060448   80210 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:45:19.074118   80210 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 20:45:19.086042   80210 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:45:19.086119   80210 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:45:19.098270   80210 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 20:45:19.108407   80210 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:45:19.108481   80210 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:45:19.119067   80210 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 20:45:19.130959   80210 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:45:19.131027   80210 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
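(The grep/rm pairs above prune kubeconfig files that do not reference the expected control-plane endpoint before kubeadm runs; here they simply do not exist yet, so each one is removed and regenerated. A hedged Go sketch of that check, assuming a plain substring test stands in for the `sudo grep` over SSH:)

```go
// Hypothetical sketch of the stale-kubeconfig cleanup traced above.
package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func pruneStaleConf(path string) error {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil // nothing to clean up, matches the "No such file" case in the log
	}
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // config already points at the expected endpoint
	}
	fmt.Printf("removing stale %s\n", path)
	return os.Remove(path)
}

func main() {
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		if err := pruneStaleConf("/etc/kubernetes/" + f); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
```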
	I1001 20:45:19.142158   80210 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 20:45:19.193797   80210 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 20:45:19.193861   80210 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:45:19.298494   80210 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:45:19.298671   80210 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:45:19.298818   80210 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 20:45:19.308243   80210 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 20:45:15.485161   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:15.485741   81817 main.go:141] libmachine: (bridge-983557) DBG | unable to find current IP address of domain bridge-983557 in network mk-bridge-983557
	I1001 20:45:15.485790   81817 main.go:141] libmachine: (bridge-983557) DBG | I1001 20:45:15.485700   81882 retry.go:31] will retry after 869.234291ms: waiting for machine to come up
	I1001 20:45:16.356943   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:16.357631   81817 main.go:141] libmachine: (bridge-983557) DBG | unable to find current IP address of domain bridge-983557 in network mk-bridge-983557
	I1001 20:45:16.357663   81817 main.go:141] libmachine: (bridge-983557) DBG | I1001 20:45:16.357554   81882 retry.go:31] will retry after 873.740243ms: waiting for machine to come up
	I1001 20:45:17.233411   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:17.233992   81817 main.go:141] libmachine: (bridge-983557) DBG | unable to find current IP address of domain bridge-983557 in network mk-bridge-983557
	I1001 20:45:17.234020   81817 main.go:141] libmachine: (bridge-983557) DBG | I1001 20:45:17.233941   81882 retry.go:31] will retry after 1.134044347s: waiting for machine to come up
	I1001 20:45:18.370065   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:18.370668   81817 main.go:141] libmachine: (bridge-983557) DBG | unable to find current IP address of domain bridge-983557 in network mk-bridge-983557
	I1001 20:45:18.370697   81817 main.go:141] libmachine: (bridge-983557) DBG | I1001 20:45:18.370597   81882 retry.go:31] will retry after 1.420569949s: waiting for machine to come up
	I1001 20:45:19.792960   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:19.793531   81817 main.go:141] libmachine: (bridge-983557) DBG | unable to find current IP address of domain bridge-983557 in network mk-bridge-983557
	I1001 20:45:19.793554   81817 main.go:141] libmachine: (bridge-983557) DBG | I1001 20:45:19.793441   81882 retry.go:31] will retry after 2.289248166s: waiting for machine to come up
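(The libmachine DBG lines for bridge-983557 above show a retry loop waiting for the VM's DHCP lease, with the delay growing on each attempt. A rough Go sketch of that retry-with-backoff pattern; lookupIP is a placeholder, not a real libmachine call.)

```go
// Hypothetical sketch of the "will retry after ...: waiting for machine to
// come up" loop: poll for an IP with an increasing, jittered delay.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func lookupIP() (string, error) {
	// Placeholder: in minikube this asks libvirt for the domain's DHCP lease.
	return "", errors.New("unable to find current IP address")
}

func waitForIP(deadline time.Duration) (string, error) {
	delay := 500 * time.Millisecond
	start := time.Now()
	for time.Since(start) < deadline {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay += delay / 2 // grow the delay, as the log's retry intervals do
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	if _, err := waitForIP(5 * time.Second); err != nil {
		fmt.Println(err)
	}
}
```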
	I1001 20:45:19.474674   80210 out.go:235]   - Generating certificates and keys ...
	I1001 20:45:19.474829   80210 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:45:19.474966   80210 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:45:19.475089   80210 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1001 20:45:19.640664   80210 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1001 20:45:19.808055   80210 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1001 20:45:19.989567   80210 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1001 20:45:20.129971   80210 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1001 20:45:20.130212   80210 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-983557 localhost] and IPs [192.168.39.251 127.0.0.1 ::1]
	I1001 20:45:20.307585   80210 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1001 20:45:20.307814   80210 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-983557 localhost] and IPs [192.168.39.251 127.0.0.1 ::1]
	I1001 20:45:20.493688   80210 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1001 20:45:20.727763   80210 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1001 20:45:20.822550   80210 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1001 20:45:20.822866   80210 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:45:21.093635   80210 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:45:21.249471   80210 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 20:45:21.371542   80210 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:45:21.582704   80210 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:45:21.804966   80210 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:45:21.805730   80210 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:45:21.808882   80210 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:45:18.348573   78160 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgnln" in "kube-system" namespace has status "Ready":"False"
	I1001 20:45:20.845335   78160 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgnln" in "kube-system" namespace has status "Ready":"False"
	I1001 20:45:22.084955   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:22.085526   81817 main.go:141] libmachine: (bridge-983557) DBG | unable to find current IP address of domain bridge-983557 in network mk-bridge-983557
	I1001 20:45:22.085553   81817 main.go:141] libmachine: (bridge-983557) DBG | I1001 20:45:22.085468   81882 retry.go:31] will retry after 2.352592442s: waiting for machine to come up
	I1001 20:45:24.439311   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:24.439858   81817 main.go:141] libmachine: (bridge-983557) DBG | unable to find current IP address of domain bridge-983557 in network mk-bridge-983557
	I1001 20:45:24.439886   81817 main.go:141] libmachine: (bridge-983557) DBG | I1001 20:45:24.439812   81882 retry.go:31] will retry after 3.489313167s: waiting for machine to come up
	I1001 20:45:21.810685   80210 out.go:235]   - Booting up control plane ...
	I1001 20:45:21.810831   80210 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:45:21.810959   80210 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:45:21.811053   80210 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:45:21.831566   80210 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:45:21.838544   80210 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:45:21.838644   80210 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:45:21.986039   80210 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 20:45:21.986169   80210 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 20:45:22.988183   80210 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002398832s
	I1001 20:45:22.988291   80210 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
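(The [kubelet-check] and [api-check] phases above poll local health endpoints until they return HTTP 200. A minimal Go sketch of such a healthz wait, using the kubelet healthz URL shown in the log; this is illustrative, not kubeadm's implementation.)

```go
// Hypothetical sketch: poll a healthz endpoint until it answers 200 OK
// or the timeout expires, as the kubelet-check/api-check lines describe.
package main

import (
	"errors"
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	client := &http.Client{Timeout: 2 * time.Second}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return errors.New("timed out waiting for " + url)
}

func main() {
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kubelet is healthy")
}
```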
	I1001 20:45:23.345848   78160 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgnln" in "kube-system" namespace has status "Ready":"False"
	I1001 20:45:25.844292   78160 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgnln" in "kube-system" namespace has status "Ready":"False"
	I1001 20:45:27.986577   80210 kubeadm.go:310] [api-check] The API server is healthy after 5.001172539s
	I1001 20:45:28.004541   80210 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 20:45:28.027202   80210 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 20:45:28.071026   80210 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 20:45:28.071199   80210 kubeadm.go:310] [mark-control-plane] Marking the node flannel-983557 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 20:45:28.092317   80210 kubeadm.go:310] [bootstrap-token] Using token: nbsykt.vh5yosybyaz2kkdf
	I1001 20:45:27.846419   78160 pod_ready.go:93] pod "coredns-7c65d6cfc9-fgnln" in "kube-system" namespace has status "Ready":"True"
	I1001 20:45:27.846442   78160 pod_ready.go:82] duration metric: took 41.508589866s for pod "coredns-7c65d6cfc9-fgnln" in "kube-system" namespace to be "Ready" ...
	I1001 20:45:27.846451   78160 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-mdzhr" in "kube-system" namespace to be "Ready" ...
	I1001 20:45:27.848841   78160 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-mdzhr" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-mdzhr" not found
	I1001 20:45:27.848871   78160 pod_ready.go:82] duration metric: took 2.413413ms for pod "coredns-7c65d6cfc9-mdzhr" in "kube-system" namespace to be "Ready" ...
	E1001 20:45:27.848883   78160 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-mdzhr" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-mdzhr" not found
	I1001 20:45:27.848891   78160 pod_ready.go:79] waiting up to 15m0s for pod "etcd-enable-default-cni-983557" in "kube-system" namespace to be "Ready" ...
	I1001 20:45:27.855594   78160 pod_ready.go:93] pod "etcd-enable-default-cni-983557" in "kube-system" namespace has status "Ready":"True"
	I1001 20:45:27.855621   78160 pod_ready.go:82] duration metric: took 6.721295ms for pod "etcd-enable-default-cni-983557" in "kube-system" namespace to be "Ready" ...
	I1001 20:45:27.855634   78160 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-983557" in "kube-system" namespace to be "Ready" ...
	I1001 20:45:27.860839   78160 pod_ready.go:93] pod "kube-apiserver-enable-default-cni-983557" in "kube-system" namespace has status "Ready":"True"
	I1001 20:45:27.860858   78160 pod_ready.go:82] duration metric: took 5.217852ms for pod "kube-apiserver-enable-default-cni-983557" in "kube-system" namespace to be "Ready" ...
	I1001 20:45:27.860868   78160 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-983557" in "kube-system" namespace to be "Ready" ...
	I1001 20:45:27.866762   78160 pod_ready.go:93] pod "kube-controller-manager-enable-default-cni-983557" in "kube-system" namespace has status "Ready":"True"
	I1001 20:45:27.866786   78160 pod_ready.go:82] duration metric: took 5.911042ms for pod "kube-controller-manager-enable-default-cni-983557" in "kube-system" namespace to be "Ready" ...
	I1001 20:45:27.866799   78160 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-57x6g" in "kube-system" namespace to be "Ready" ...
	I1001 20:45:28.043875   78160 pod_ready.go:93] pod "kube-proxy-57x6g" in "kube-system" namespace has status "Ready":"True"
	I1001 20:45:28.043903   78160 pod_ready.go:82] duration metric: took 177.095617ms for pod "kube-proxy-57x6g" in "kube-system" namespace to be "Ready" ...
	I1001 20:45:28.043915   78160 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-983557" in "kube-system" namespace to be "Ready" ...
	I1001 20:45:28.442127   78160 pod_ready.go:93] pod "kube-scheduler-enable-default-cni-983557" in "kube-system" namespace has status "Ready":"True"
	I1001 20:45:28.442151   78160 pod_ready.go:82] duration metric: took 398.22724ms for pod "kube-scheduler-enable-default-cni-983557" in "kube-system" namespace to be "Ready" ...
	I1001 20:45:28.442160   78160 pod_ready.go:39] duration metric: took 42.151017625s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:45:28.442180   78160 api_server.go:52] waiting for apiserver process to appear ...
	I1001 20:45:28.442227   78160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:45:28.468692   78160 api_server.go:72] duration metric: took 43.291293146s to wait for apiserver process to appear ...
	I1001 20:45:28.468723   78160 api_server.go:88] waiting for apiserver healthz status ...
	I1001 20:45:28.468745   78160 api_server.go:253] Checking apiserver healthz at https://192.168.61.17:8443/healthz ...
	I1001 20:45:28.474021   78160 api_server.go:279] https://192.168.61.17:8443/healthz returned 200:
	ok
	I1001 20:45:28.475289   78160 api_server.go:141] control plane version: v1.31.1
	I1001 20:45:28.475310   78160 api_server.go:131] duration metric: took 6.580065ms to wait for apiserver health ...
	I1001 20:45:28.475318   78160 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 20:45:28.646766   78160 system_pods.go:59] 7 kube-system pods found
	I1001 20:45:28.646797   78160 system_pods.go:61] "coredns-7c65d6cfc9-fgnln" [6f35a0fb-26ef-4b39-ac37-45bc4becfe9c] Running
	I1001 20:45:28.646803   78160 system_pods.go:61] "etcd-enable-default-cni-983557" [807f90f2-b247-4404-a65d-be17e0813db5] Running
	I1001 20:45:28.646812   78160 system_pods.go:61] "kube-apiserver-enable-default-cni-983557" [cfd18268-0e8e-46d5-8034-d8203a0a0de6] Running
	I1001 20:45:28.646816   78160 system_pods.go:61] "kube-controller-manager-enable-default-cni-983557" [f31ba78c-1fea-48cc-9287-f921f66c64c6] Running
	I1001 20:45:28.646819   78160 system_pods.go:61] "kube-proxy-57x6g" [acf3429e-cf7c-45f2-9eb1-05c583d17d69] Running
	I1001 20:45:28.646823   78160 system_pods.go:61] "kube-scheduler-enable-default-cni-983557" [ecd0b49f-3839-4feb-a402-354762d568e0] Running
	I1001 20:45:28.646825   78160 system_pods.go:61] "storage-provisioner" [0482e96f-c9aa-4272-ba71-64e165b6b4b8] Running
	I1001 20:45:28.646831   78160 system_pods.go:74] duration metric: took 171.507916ms to wait for pod list to return data ...
	I1001 20:45:28.646837   78160 default_sa.go:34] waiting for default service account to be created ...
	I1001 20:45:28.843203   78160 default_sa.go:45] found service account: "default"
	I1001 20:45:28.843238   78160 default_sa.go:55] duration metric: took 196.393689ms for default service account to be created ...
	I1001 20:45:28.843251   78160 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 20:45:29.048894   78160 system_pods.go:86] 7 kube-system pods found
	I1001 20:45:29.048926   78160 system_pods.go:89] "coredns-7c65d6cfc9-fgnln" [6f35a0fb-26ef-4b39-ac37-45bc4becfe9c] Running
	I1001 20:45:29.048934   78160 system_pods.go:89] "etcd-enable-default-cni-983557" [807f90f2-b247-4404-a65d-be17e0813db5] Running
	I1001 20:45:29.048940   78160 system_pods.go:89] "kube-apiserver-enable-default-cni-983557" [cfd18268-0e8e-46d5-8034-d8203a0a0de6] Running
	I1001 20:45:29.048946   78160 system_pods.go:89] "kube-controller-manager-enable-default-cni-983557" [f31ba78c-1fea-48cc-9287-f921f66c64c6] Running
	I1001 20:45:29.048950   78160 system_pods.go:89] "kube-proxy-57x6g" [acf3429e-cf7c-45f2-9eb1-05c583d17d69] Running
	I1001 20:45:29.048955   78160 system_pods.go:89] "kube-scheduler-enable-default-cni-983557" [ecd0b49f-3839-4feb-a402-354762d568e0] Running
	I1001 20:45:29.048960   78160 system_pods.go:89] "storage-provisioner" [0482e96f-c9aa-4272-ba71-64e165b6b4b8] Running
	I1001 20:45:29.048968   78160 system_pods.go:126] duration metric: took 205.710713ms to wait for k8s-apps to be running ...
	I1001 20:45:29.048978   78160 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 20:45:29.049028   78160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:45:29.065311   78160 system_svc.go:56] duration metric: took 16.32522ms WaitForService to wait for kubelet
	I1001 20:45:29.065344   78160 kubeadm.go:582] duration metric: took 43.887952026s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:45:29.065362   78160 node_conditions.go:102] verifying NodePressure condition ...
	I1001 20:45:29.243453   78160 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 20:45:29.243483   78160 node_conditions.go:123] node cpu capacity is 2
	I1001 20:45:29.243496   78160 node_conditions.go:105] duration metric: took 178.129046ms to run NodePressure ...
	I1001 20:45:29.243507   78160 start.go:241] waiting for startup goroutines ...
	I1001 20:45:29.243514   78160 start.go:246] waiting for cluster config update ...
	I1001 20:45:29.243523   78160 start.go:255] writing updated cluster config ...
	I1001 20:45:29.243886   78160 ssh_runner.go:195] Run: rm -f paused
	I1001 20:45:29.295082   78160 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 20:45:29.296794   78160 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-983557" cluster and "default" namespace by default
	W1001 20:45:29.306005   78160 root.go:91] failed to log command end to audit: failed to find a log row with id equals to b52b6ea8-48ce-475c-b47b-f6c853af9014
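(The pod_ready.go lines above wait for each system pod's Ready condition before the cluster is declared usable. A small client-go sketch of the same check, assuming the k8s.io/client-go module is available; the kubeconfig path and pod name are examples taken from context, not minikube's own code.)

```go
// Hypothetical sketch: report whether a pod's Ready condition is True,
// the same signal the `"Ready":"True"` lines above are logging.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // example path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-fgnln", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```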
	I1001 20:45:28.093802   80210 out.go:235]   - Configuring RBAC rules ...
	I1001 20:45:28.093987   80210 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 20:45:28.101918   80210 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 20:45:28.117964   80210 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 20:45:28.124067   80210 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 20:45:28.132747   80210 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 20:45:28.140892   80210 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 20:45:28.399556   80210 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 20:45:28.874781   80210 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 20:45:29.397905   80210 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 20:45:29.398846   80210 kubeadm.go:310] 
	I1001 20:45:29.398938   80210 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 20:45:29.398951   80210 kubeadm.go:310] 
	I1001 20:45:29.399056   80210 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 20:45:29.399066   80210 kubeadm.go:310] 
	I1001 20:45:29.399098   80210 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 20:45:29.399731   80210 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 20:45:29.399784   80210 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 20:45:29.399791   80210 kubeadm.go:310] 
	I1001 20:45:29.399888   80210 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 20:45:29.399907   80210 kubeadm.go:310] 
	I1001 20:45:29.399973   80210 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 20:45:29.399986   80210 kubeadm.go:310] 
	I1001 20:45:29.400060   80210 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 20:45:29.400164   80210 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 20:45:29.400259   80210 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 20:45:29.400270   80210 kubeadm.go:310] 
	I1001 20:45:29.400417   80210 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 20:45:29.400547   80210 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 20:45:29.400563   80210 kubeadm.go:310] 
	I1001 20:45:29.400683   80210 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token nbsykt.vh5yosybyaz2kkdf \
	I1001 20:45:29.400799   80210 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 \
	I1001 20:45:29.400834   80210 kubeadm.go:310] 	--control-plane 
	I1001 20:45:29.400888   80210 kubeadm.go:310] 
	I1001 20:45:29.401010   80210 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 20:45:29.401022   80210 kubeadm.go:310] 
	I1001 20:45:29.401097   80210 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nbsykt.vh5yosybyaz2kkdf \
	I1001 20:45:29.401254   80210 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 
	I1001 20:45:29.402346   80210 kubeadm.go:310] W1001 20:45:19.176539     841 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 20:45:29.402679   80210 kubeadm.go:310] W1001 20:45:19.177353     841 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 20:45:29.402824   80210 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 20:45:29.402852   80210 cni.go:84] Creating CNI manager for "flannel"
	I1001 20:45:29.404472   80210 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I1001 20:45:27.930700   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:27.931187   81817 main.go:141] libmachine: (bridge-983557) DBG | unable to find current IP address of domain bridge-983557 in network mk-bridge-983557
	I1001 20:45:27.931204   81817 main.go:141] libmachine: (bridge-983557) DBG | I1001 20:45:27.931131   81882 retry.go:31] will retry after 3.236229975s: waiting for machine to come up
	I1001 20:45:29.405592   80210 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1001 20:45:29.410800   80210 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1001 20:45:29.410815   80210 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I1001 20:45:29.434246   80210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1001 20:45:29.972629   80210 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 20:45:29.972738   80210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:45:29.972743   80210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-983557 minikube.k8s.io/updated_at=2024_10_01T20_45_29_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=flannel-983557 minikube.k8s.io/primary=true
	I1001 20:45:30.011109   80210 ops.go:34] apiserver oom_adj: -16
	I1001 20:45:30.148250   80210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:45:30.648505   80210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:45:31.148529   80210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:45:31.648901   80210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:45:32.148813   80210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:45:32.649165   80210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:45:33.148340   80210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:45:33.649266   80210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:45:34.148531   80210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:45:34.257440   80210 kubeadm.go:1113] duration metric: took 4.28477524s to wait for elevateKubeSystemPrivileges
	I1001 20:45:34.257472   80210 kubeadm.go:394] duration metric: took 15.286144238s to StartCluster
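(The repeated `kubectl get sa default` calls above poll until the default ServiceAccount exists in the new cluster, which is what the elevateKubeSystemPrivileges step waits on before the RBAC binding can be applied. A hedged sketch of that poll, shelling out to kubectl the way the log does:)

```go
// Hypothetical sketch: retry `kubectl get sa default` until it succeeds,
// i.e. until the controller-manager has created the default ServiceAccount.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func defaultSAExists(kubeconfig string) bool {
	cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
	return cmd.Run() == nil
}

func main() {
	kubeconfig := "/var/lib/minikube/kubeconfig" // path taken from the log above
	for !defaultSAExists(kubeconfig) {
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("default ServiceAccount is ready")
}
```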
	I1001 20:45:34.257489   80210 settings.go:142] acquiring lock: {Name:mkeb3fe64ed992373f048aef24eb0d675bcab60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:45:34.257557   80210 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:45:34.258700   80210 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/kubeconfig: {Name:mk7ef19d546928001abf2478a3abe1d17765a591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:45:34.258946   80210 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 20:45:34.258973   80210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1001 20:45:34.259011   80210 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 20:45:34.259134   80210 addons.go:69] Setting storage-provisioner=true in profile "flannel-983557"
	I1001 20:45:34.259139   80210 addons.go:69] Setting default-storageclass=true in profile "flannel-983557"
	I1001 20:45:34.259154   80210 addons.go:234] Setting addon storage-provisioner=true in "flannel-983557"
	I1001 20:45:34.259158   80210 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-983557"
	I1001 20:45:34.259184   80210 config.go:182] Loaded profile config "flannel-983557": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:45:34.259195   80210 host.go:66] Checking if "flannel-983557" exists ...
	I1001 20:45:34.259534   80210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:45:34.259564   80210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:45:34.259621   80210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:45:34.259644   80210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:45:34.260322   80210 out.go:177] * Verifying Kubernetes components...
	I1001 20:45:34.261371   80210 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:45:34.275314   80210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46239
	I1001 20:45:34.275327   80210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45553
	I1001 20:45:34.275760   80210 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:45:34.275872   80210 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:45:34.276269   80210 main.go:141] libmachine: Using API Version  1
	I1001 20:45:34.276289   80210 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:45:34.276444   80210 main.go:141] libmachine: Using API Version  1
	I1001 20:45:34.276469   80210 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:45:34.276640   80210 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:45:34.276789   80210 main.go:141] libmachine: (flannel-983557) Calling .GetState
	I1001 20:45:34.276801   80210 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:45:34.277223   80210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:45:34.277249   80210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:45:34.280133   80210 addons.go:234] Setting addon default-storageclass=true in "flannel-983557"
	I1001 20:45:34.280176   80210 host.go:66] Checking if "flannel-983557" exists ...
	I1001 20:45:34.280491   80210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:45:34.280522   80210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:45:34.293808   80210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43505
	I1001 20:45:34.294274   80210 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:45:34.294832   80210 main.go:141] libmachine: Using API Version  1
	I1001 20:45:34.294858   80210 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:45:34.295178   80210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46755
	I1001 20:45:34.295233   80210 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:45:34.295424   80210 main.go:141] libmachine: (flannel-983557) Calling .GetState
	I1001 20:45:34.295645   80210 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:45:34.296093   80210 main.go:141] libmachine: Using API Version  1
	I1001 20:45:34.296115   80210 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:45:34.296459   80210 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:45:34.297010   80210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:45:34.297040   80210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:45:34.297239   80210 main.go:141] libmachine: (flannel-983557) Calling .DriverName
	I1001 20:45:34.299041   80210 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 20:45:31.170035   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:31.170647   81817 main.go:141] libmachine: (bridge-983557) DBG | unable to find current IP address of domain bridge-983557 in network mk-bridge-983557
	I1001 20:45:31.170676   81817 main.go:141] libmachine: (bridge-983557) DBG | I1001 20:45:31.170600   81882 retry.go:31] will retry after 4.77424278s: waiting for machine to come up
	I1001 20:45:34.300490   80210 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:45:34.300510   80210 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 20:45:34.300531   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHHostname
	I1001 20:45:34.303894   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:34.304308   80210 main.go:141] libmachine: (flannel-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:dd:e9", ip: ""} in network mk-flannel-983557: {Iface:virbr1 ExpiryTime:2024-10-01 21:45:00 +0000 UTC Type:0 Mac:52:54:00:33:dd:e9 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:flannel-983557 Clientid:01:52:54:00:33:dd:e9}
	I1001 20:45:34.304333   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined IP address 192.168.39.251 and MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:34.304520   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHPort
	I1001 20:45:34.304707   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHKeyPath
	I1001 20:45:34.304835   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHUsername
	I1001 20:45:34.305012   80210 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/flannel-983557/id_rsa Username:docker}
	I1001 20:45:34.313870   80210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44559
	I1001 20:45:34.314316   80210 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:45:34.314817   80210 main.go:141] libmachine: Using API Version  1
	I1001 20:45:34.314842   80210 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:45:34.315201   80210 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:45:34.315402   80210 main.go:141] libmachine: (flannel-983557) Calling .GetState
	I1001 20:45:34.317100   80210 main.go:141] libmachine: (flannel-983557) Calling .DriverName
	I1001 20:45:34.317353   80210 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 20:45:34.317372   80210 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 20:45:34.317392   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHHostname
	I1001 20:45:34.320206   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:34.320659   80210 main.go:141] libmachine: (flannel-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:dd:e9", ip: ""} in network mk-flannel-983557: {Iface:virbr1 ExpiryTime:2024-10-01 21:45:00 +0000 UTC Type:0 Mac:52:54:00:33:dd:e9 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:flannel-983557 Clientid:01:52:54:00:33:dd:e9}
	I1001 20:45:34.320687   80210 main.go:141] libmachine: (flannel-983557) DBG | domain flannel-983557 has defined IP address 192.168.39.251 and MAC address 52:54:00:33:dd:e9 in network mk-flannel-983557
	I1001 20:45:34.320827   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHPort
	I1001 20:45:34.320995   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHKeyPath
	I1001 20:45:34.321129   80210 main.go:141] libmachine: (flannel-983557) Calling .GetSSHUsername
	I1001 20:45:34.321234   80210 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/flannel-983557/id_rsa Username:docker}
	I1001 20:45:34.508827   80210 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:45:34.508908   80210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1001 20:45:34.625228   80210 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 20:45:34.715538   80210 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:45:35.039373   80210 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1001 20:45:35.040297   80210 node_ready.go:35] waiting up to 15m0s for node "flannel-983557" to be "Ready" ...
	I1001 20:45:35.044476   80210 main.go:141] libmachine: Making call to close driver server
	I1001 20:45:35.044502   80210 main.go:141] libmachine: (flannel-983557) Calling .Close
	I1001 20:45:35.044892   80210 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:45:35.044909   80210 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:45:35.044919   80210 main.go:141] libmachine: Making call to close driver server
	I1001 20:45:35.044927   80210 main.go:141] libmachine: (flannel-983557) Calling .Close
	I1001 20:45:35.044895   80210 main.go:141] libmachine: (flannel-983557) DBG | Closing plugin on server side
	I1001 20:45:35.045137   80210 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:45:35.045150   80210 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:45:35.045170   80210 main.go:141] libmachine: (flannel-983557) DBG | Closing plugin on server side
	I1001 20:45:35.066255   80210 main.go:141] libmachine: Making call to close driver server
	I1001 20:45:35.066287   80210 main.go:141] libmachine: (flannel-983557) Calling .Close
	I1001 20:45:35.066645   80210 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:45:35.066665   80210 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:45:35.345269   80210 main.go:141] libmachine: Making call to close driver server
	I1001 20:45:35.345302   80210 main.go:141] libmachine: (flannel-983557) Calling .Close
	I1001 20:45:35.345586   80210 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:45:35.345601   80210 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:45:35.345609   80210 main.go:141] libmachine: Making call to close driver server
	I1001 20:45:35.345616   80210 main.go:141] libmachine: (flannel-983557) Calling .Close
	I1001 20:45:35.345620   80210 main.go:141] libmachine: (flannel-983557) DBG | Closing plugin on server side
	I1001 20:45:35.345831   80210 main.go:141] libmachine: (flannel-983557) DBG | Closing plugin on server side
	I1001 20:45:35.345865   80210 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:45:35.345876   80210 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:45:35.347245   80210 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1001 20:45:35.348268   80210 addons.go:510] duration metric: took 1.089259801s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1001 20:45:35.946907   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:35.947426   81817 main.go:141] libmachine: (bridge-983557) Found IP for machine: 192.168.72.18
	I1001 20:45:35.947451   81817 main.go:141] libmachine: (bridge-983557) Reserving static IP address...
	I1001 20:45:35.947460   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has current primary IP address 192.168.72.18 and MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:35.947744   81817 main.go:141] libmachine: (bridge-983557) DBG | unable to find host DHCP lease matching {name: "bridge-983557", mac: "52:54:00:f5:66:cb", ip: "192.168.72.18"} in network mk-bridge-983557
	I1001 20:45:36.031542   81817 main.go:141] libmachine: (bridge-983557) DBG | Getting to WaitForSSH function...
	I1001 20:45:36.031569   81817 main.go:141] libmachine: (bridge-983557) Reserved static IP address: 192.168.72.18
	I1001 20:45:36.031584   81817 main.go:141] libmachine: (bridge-983557) Waiting for SSH to be available...
	I1001 20:45:36.034709   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:36.035196   81817 main.go:141] libmachine: (bridge-983557) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f5:66:cb", ip: ""} in network mk-bridge-983557
	I1001 20:45:36.035229   81817 main.go:141] libmachine: (bridge-983557) DBG | unable to find defined IP address of network mk-bridge-983557 interface with MAC address 52:54:00:f5:66:cb
	I1001 20:45:36.035386   81817 main.go:141] libmachine: (bridge-983557) DBG | Using SSH client type: external
	I1001 20:45:36.035416   81817 main.go:141] libmachine: (bridge-983557) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/bridge-983557/id_rsa (-rw-------)
	I1001 20:45:36.035456   81817 main.go:141] libmachine: (bridge-983557) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/bridge-983557/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 20:45:36.035469   81817 main.go:141] libmachine: (bridge-983557) DBG | About to run SSH command:
	I1001 20:45:36.035487   81817 main.go:141] libmachine: (bridge-983557) DBG | exit 0
	I1001 20:45:36.040105   81817 main.go:141] libmachine: (bridge-983557) DBG | SSH cmd err, output: exit status 255: 
	I1001 20:45:36.040134   81817 main.go:141] libmachine: (bridge-983557) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1001 20:45:36.040144   81817 main.go:141] libmachine: (bridge-983557) DBG | command : exit 0
	I1001 20:45:36.040151   81817 main.go:141] libmachine: (bridge-983557) DBG | err     : exit status 255
	I1001 20:45:36.040162   81817 main.go:141] libmachine: (bridge-983557) DBG | output  : 
	I1001 20:45:39.040303   81817 main.go:141] libmachine: (bridge-983557) DBG | Getting to WaitForSSH function...
	I1001 20:45:39.042804   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:39.043190   81817 main.go:141] libmachine: (bridge-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:66:cb", ip: ""} in network mk-bridge-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:45:27 +0000 UTC Type:0 Mac:52:54:00:f5:66:cb Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:bridge-983557 Clientid:01:52:54:00:f5:66:cb}
	I1001 20:45:39.043224   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined IP address 192.168.72.18 and MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:39.043348   81817 main.go:141] libmachine: (bridge-983557) DBG | Using SSH client type: external
	I1001 20:45:39.043371   81817 main.go:141] libmachine: (bridge-983557) DBG | Using SSH private key: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/bridge-983557/id_rsa (-rw-------)
	I1001 20:45:39.043416   81817 main.go:141] libmachine: (bridge-983557) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.18 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19736-11198/.minikube/machines/bridge-983557/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 20:45:39.043431   81817 main.go:141] libmachine: (bridge-983557) DBG | About to run SSH command:
	I1001 20:45:39.043463   81817 main.go:141] libmachine: (bridge-983557) DBG | exit 0
	I1001 20:45:39.168442   81817 main.go:141] libmachine: (bridge-983557) DBG | SSH cmd err, output: <nil>: 
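(The WaitForSSH exchange above probes the new VM by running `exit 0` through the system ssh client until sshd answers: the first attempt fails with exit status 255, the retry succeeds. A minimal Go sketch of that probe; the key path below is a generic example rather than the Jenkins path from the log, and only a subset of the ssh options shown above is used.)

```go
// Hypothetical sketch: run `exit 0` over ssh until the connection succeeds,
// mirroring the libmachine WaitForSSH loop in the log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshAlive(ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@"+ip, "exit 0")
	return cmd.Run() == nil // non-zero exit (e.g. status 255) means sshd isn't ready yet
}

func main() {
	ip, key := "192.168.72.18", "/home/user/.minikube/machines/bridge-983557/id_rsa"
	for !sshAlive(ip, key) {
		fmt.Println("SSH not available yet, retrying...")
		time.Sleep(3 * time.Second)
	}
	fmt.Println("SSH is available")
}
```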
	I1001 20:45:39.168668   81817 main.go:141] libmachine: (bridge-983557) KVM machine creation complete!
	I1001 20:45:39.169017   81817 main.go:141] libmachine: (bridge-983557) Calling .GetConfigRaw
	I1001 20:45:39.169539   81817 main.go:141] libmachine: (bridge-983557) Calling .DriverName
	I1001 20:45:39.169766   81817 main.go:141] libmachine: (bridge-983557) Calling .DriverName
	I1001 20:45:39.169896   81817 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 20:45:39.169911   81817 main.go:141] libmachine: (bridge-983557) Calling .GetState
	I1001 20:45:39.171282   81817 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 20:45:39.171296   81817 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 20:45:39.171301   81817 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 20:45:39.171308   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHHostname
	I1001 20:45:39.173818   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:39.174310   81817 main.go:141] libmachine: (bridge-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:66:cb", ip: ""} in network mk-bridge-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:45:27 +0000 UTC Type:0 Mac:52:54:00:f5:66:cb Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:bridge-983557 Clientid:01:52:54:00:f5:66:cb}
	I1001 20:45:39.174390   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined IP address 192.168.72.18 and MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:39.174614   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHPort
	I1001 20:45:39.174803   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHKeyPath
	I1001 20:45:39.174959   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHKeyPath
	I1001 20:45:39.175084   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHUsername
	I1001 20:45:39.175235   81817 main.go:141] libmachine: Using SSH client type: native
	I1001 20:45:39.175417   81817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I1001 20:45:39.175435   81817 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 20:45:39.275874   81817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 20:45:39.275900   81817 main.go:141] libmachine: Detecting the provisioner...
	I1001 20:45:39.275911   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHHostname
	I1001 20:45:39.278793   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:39.279119   81817 main.go:141] libmachine: (bridge-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:66:cb", ip: ""} in network mk-bridge-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:45:27 +0000 UTC Type:0 Mac:52:54:00:f5:66:cb Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:bridge-983557 Clientid:01:52:54:00:f5:66:cb}
	I1001 20:45:39.279149   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined IP address 192.168.72.18 and MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:39.279289   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHPort
	I1001 20:45:39.279490   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHKeyPath
	I1001 20:45:39.279621   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHKeyPath
	I1001 20:45:39.279793   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHUsername
	I1001 20:45:39.279956   81817 main.go:141] libmachine: Using SSH client type: native
	I1001 20:45:39.280160   81817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I1001 20:45:39.280172   81817 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 20:45:39.381518   81817 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 20:45:39.381622   81817 main.go:141] libmachine: found compatible host: buildroot
	I1001 20:45:39.381637   81817 main.go:141] libmachine: Provisioning with buildroot...
	I1001 20:45:39.381651   81817 main.go:141] libmachine: (bridge-983557) Calling .GetMachineName
	I1001 20:45:39.381905   81817 buildroot.go:166] provisioning hostname "bridge-983557"
	I1001 20:45:39.381931   81817 main.go:141] libmachine: (bridge-983557) Calling .GetMachineName
	I1001 20:45:39.382103   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHHostname
	I1001 20:45:39.384856   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:39.385241   81817 main.go:141] libmachine: (bridge-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:66:cb", ip: ""} in network mk-bridge-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:45:27 +0000 UTC Type:0 Mac:52:54:00:f5:66:cb Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:bridge-983557 Clientid:01:52:54:00:f5:66:cb}
	I1001 20:45:39.385266   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined IP address 192.168.72.18 and MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:39.385435   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHPort
	I1001 20:45:39.385679   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHKeyPath
	I1001 20:45:39.385842   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHKeyPath
	I1001 20:45:39.385972   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHUsername
	I1001 20:45:39.386149   81817 main.go:141] libmachine: Using SSH client type: native
	I1001 20:45:39.386320   81817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I1001 20:45:39.386333   81817 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-983557 && echo "bridge-983557" | sudo tee /etc/hostname
	I1001 20:45:39.497660   81817 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-983557
	
	I1001 20:45:39.497697   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHHostname
	I1001 20:45:39.500734   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:39.501294   81817 main.go:141] libmachine: (bridge-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:66:cb", ip: ""} in network mk-bridge-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:45:27 +0000 UTC Type:0 Mac:52:54:00:f5:66:cb Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:bridge-983557 Clientid:01:52:54:00:f5:66:cb}
	I1001 20:45:39.501321   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined IP address 192.168.72.18 and MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:39.501514   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHPort
	I1001 20:45:39.501700   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHKeyPath
	I1001 20:45:39.501844   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHKeyPath
	I1001 20:45:39.501991   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHUsername
	I1001 20:45:39.502124   81817 main.go:141] libmachine: Using SSH client type: native
	I1001 20:45:39.502286   81817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I1001 20:45:39.502302   81817 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-983557' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-983557/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-983557' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 20:45:39.608926   81817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
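
The SSH snippet the log just ran is the hostname wiring minikube pushes into the guest's /etc/hosts. As a rough illustration only (not minikube's code path), the same idempotent update could be done directly against the file, assuming root access on the guest; the hostname and the 127.0.1.1 mapping are taken from the log:

```go
// Illustrative sketch: ensure /etc/hosts maps 127.0.1.1 to the new hostname,
// mirroring the grep/sed/tee shell snippet above. Assumes root on the guest.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "127.0.1.1 bridge-983557" // mapping taken from the log

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		log.Fatal(err)
	}
	content := string(data)
	if strings.Contains(content, "bridge-983557") {
		return // hostname already mapped; mirrors the outer grep -xq guard
	}

	lines := strings.Split(strings.TrimRight(content, "\n"), "\n")
	replaced := false
	for i, line := range lines {
		if strings.HasPrefix(line, "127.0.1.1") {
			lines[i] = entry // mirrors the sed replacement branch
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, entry) // mirrors the `tee -a` branch
	}
	if err := os.WriteFile(hostsPath, []byte(strings.Join(lines, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
}
```
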
	I1001 20:45:39.608962   81817 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19736-11198/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-11198/.minikube}
	I1001 20:45:39.609172   81817 buildroot.go:174] setting up certificates
	I1001 20:45:39.609197   81817 provision.go:84] configureAuth start
	I1001 20:45:39.609215   81817 main.go:141] libmachine: (bridge-983557) Calling .GetMachineName
	I1001 20:45:39.609556   81817 main.go:141] libmachine: (bridge-983557) Calling .GetIP
	I1001 20:45:39.612782   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:39.613111   81817 main.go:141] libmachine: (bridge-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:66:cb", ip: ""} in network mk-bridge-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:45:27 +0000 UTC Type:0 Mac:52:54:00:f5:66:cb Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:bridge-983557 Clientid:01:52:54:00:f5:66:cb}
	I1001 20:45:39.613138   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined IP address 192.168.72.18 and MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:39.613346   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHHostname
	I1001 20:45:39.616150   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:39.616561   81817 main.go:141] libmachine: (bridge-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:66:cb", ip: ""} in network mk-bridge-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:45:27 +0000 UTC Type:0 Mac:52:54:00:f5:66:cb Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:bridge-983557 Clientid:01:52:54:00:f5:66:cb}
	I1001 20:45:39.616587   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined IP address 192.168.72.18 and MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:39.616814   81817 provision.go:143] copyHostCerts
	I1001 20:45:39.616911   81817 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem, removing ...
	I1001 20:45:39.616926   81817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem
	I1001 20:45:39.616983   81817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/ca.pem (1082 bytes)
	I1001 20:45:39.617243   81817 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem, removing ...
	I1001 20:45:39.617258   81817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem
	I1001 20:45:39.617295   81817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/cert.pem (1123 bytes)
	I1001 20:45:39.617372   81817 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem, removing ...
	I1001 20:45:39.617380   81817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem
	I1001 20:45:39.617399   81817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-11198/.minikube/key.pem (1679 bytes)
	I1001 20:45:39.617482   81817 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem org=jenkins.bridge-983557 san=[127.0.0.1 192.168.72.18 bridge-983557 localhost minikube]
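
The san=[...] list in the line above is what ends up in the generated server certificate. A minimal, hypothetical sketch of issuing such a certificate with Go's crypto/x509, signed by an existing CA pair; the file names ca.pem/ca-key.pem and the PKCS#1 key encoding are assumptions for illustration, not a description of minikube internals:

```go
// Hypothetical sketch: issue a server certificate carrying the SANs from the
// log line above, signed by an existing CA key pair.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		log.Fatal("could not decode CA PEM material")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key
	if err != nil {
		log.Fatal(err)
	}

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.bridge-983557"}},
		// SANs from the log: 127.0.0.1 192.168.72.18 bridge-983557 localhost minikube
		DNSNames:    []string{"bridge-983557", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.18")},
		NotBefore:   time.Now().Add(-time.Hour),
		NotAfter:    time.Now().AddDate(3, 0, 0),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}) // server.pem equivalent
}
```
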
	I1001 20:45:40.039140   81817 provision.go:177] copyRemoteCerts
	I1001 20:45:40.039199   81817 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 20:45:40.039225   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHHostname
	I1001 20:45:40.042595   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:40.043027   81817 main.go:141] libmachine: (bridge-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:66:cb", ip: ""} in network mk-bridge-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:45:27 +0000 UTC Type:0 Mac:52:54:00:f5:66:cb Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:bridge-983557 Clientid:01:52:54:00:f5:66:cb}
	I1001 20:45:40.043060   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined IP address 192.168.72.18 and MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:40.043327   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHPort
	I1001 20:45:40.043588   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHKeyPath
	I1001 20:45:40.043794   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHUsername
	I1001 20:45:40.043949   81817 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/bridge-983557/id_rsa Username:docker}
	I1001 20:45:35.544409   80210 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-983557" context rescaled to 1 replicas
	I1001 20:45:37.043501   80210 node_ready.go:53] node "flannel-983557" has status "Ready":"False"
	I1001 20:45:39.046337   80210 node_ready.go:53] node "flannel-983557" has status "Ready":"False"
	I1001 20:45:40.122728   81817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 20:45:40.151474   81817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 20:45:40.178904   81817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 20:45:40.208155   81817 provision.go:87] duration metric: took 598.935928ms to configureAuth
	I1001 20:45:40.208192   81817 buildroot.go:189] setting minikube options for container-runtime
	I1001 20:45:40.208413   81817 config.go:182] Loaded profile config "bridge-983557": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:45:40.208524   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHHostname
	I1001 20:45:40.211906   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:40.212314   81817 main.go:141] libmachine: (bridge-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:66:cb", ip: ""} in network mk-bridge-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:45:27 +0000 UTC Type:0 Mac:52:54:00:f5:66:cb Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:bridge-983557 Clientid:01:52:54:00:f5:66:cb}
	I1001 20:45:40.212349   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined IP address 192.168.72.18 and MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:40.212610   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHPort
	I1001 20:45:40.212809   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHKeyPath
	I1001 20:45:40.212988   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHKeyPath
	I1001 20:45:40.213169   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHUsername
	I1001 20:45:40.213455   81817 main.go:141] libmachine: Using SSH client type: native
	I1001 20:45:40.213772   81817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I1001 20:45:40.213792   81817 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 20:45:40.464136   81817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 20:45:40.464191   81817 main.go:141] libmachine: Checking connection to Docker...
	I1001 20:45:40.464208   81817 main.go:141] libmachine: (bridge-983557) Calling .GetURL
	I1001 20:45:40.466060   81817 main.go:141] libmachine: (bridge-983557) DBG | Using libvirt version 6000000
	I1001 20:45:40.468977   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:40.469245   81817 main.go:141] libmachine: (bridge-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:66:cb", ip: ""} in network mk-bridge-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:45:27 +0000 UTC Type:0 Mac:52:54:00:f5:66:cb Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:bridge-983557 Clientid:01:52:54:00:f5:66:cb}
	I1001 20:45:40.469270   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined IP address 192.168.72.18 and MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:40.469568   81817 main.go:141] libmachine: Docker is up and running!
	I1001 20:45:40.469584   81817 main.go:141] libmachine: Reticulating splines...
	I1001 20:45:40.469590   81817 client.go:171] duration metric: took 28.98127874s to LocalClient.Create
	I1001 20:45:40.469613   81817 start.go:167] duration metric: took 28.981349895s to libmachine.API.Create "bridge-983557"
	I1001 20:45:40.469623   81817 start.go:293] postStartSetup for "bridge-983557" (driver="kvm2")
	I1001 20:45:40.469632   81817 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 20:45:40.469651   81817 main.go:141] libmachine: (bridge-983557) Calling .DriverName
	I1001 20:45:40.469918   81817 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 20:45:40.469958   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHHostname
	I1001 20:45:40.473154   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:40.473641   81817 main.go:141] libmachine: (bridge-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:66:cb", ip: ""} in network mk-bridge-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:45:27 +0000 UTC Type:0 Mac:52:54:00:f5:66:cb Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:bridge-983557 Clientid:01:52:54:00:f5:66:cb}
	I1001 20:45:40.473664   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined IP address 192.168.72.18 and MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:40.473903   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHPort
	I1001 20:45:40.474136   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHKeyPath
	I1001 20:45:40.474324   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHUsername
	I1001 20:45:40.474502   81817 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/bridge-983557/id_rsa Username:docker}
	I1001 20:45:40.560289   81817 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 20:45:40.565456   81817 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 20:45:40.565483   81817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/addons for local assets ...
	I1001 20:45:40.565544   81817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-11198/.minikube/files for local assets ...
	I1001 20:45:40.565650   81817 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem -> 184302.pem in /etc/ssl/certs
	I1001 20:45:40.565777   81817 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 20:45:40.575885   81817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /etc/ssl/certs/184302.pem (1708 bytes)
	I1001 20:45:40.604849   81817 start.go:296] duration metric: took 135.213132ms for postStartSetup
	I1001 20:45:40.604899   81817 main.go:141] libmachine: (bridge-983557) Calling .GetConfigRaw
	I1001 20:45:40.605623   81817 main.go:141] libmachine: (bridge-983557) Calling .GetIP
	I1001 20:45:40.609089   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:40.609485   81817 main.go:141] libmachine: (bridge-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:66:cb", ip: ""} in network mk-bridge-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:45:27 +0000 UTC Type:0 Mac:52:54:00:f5:66:cb Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:bridge-983557 Clientid:01:52:54:00:f5:66:cb}
	I1001 20:45:40.609526   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined IP address 192.168.72.18 and MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:40.609986   81817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557/config.json ...
	I1001 20:45:40.610255   81817 start.go:128] duration metric: took 29.144335897s to createHost
	I1001 20:45:40.610286   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHHostname
	I1001 20:45:40.613309   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:40.613809   81817 main.go:141] libmachine: (bridge-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:66:cb", ip: ""} in network mk-bridge-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:45:27 +0000 UTC Type:0 Mac:52:54:00:f5:66:cb Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:bridge-983557 Clientid:01:52:54:00:f5:66:cb}
	I1001 20:45:40.613852   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined IP address 192.168.72.18 and MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:40.614049   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHPort
	I1001 20:45:40.614263   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHKeyPath
	I1001 20:45:40.614412   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHKeyPath
	I1001 20:45:40.614601   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHUsername
	I1001 20:45:40.614785   81817 main.go:141] libmachine: Using SSH client type: native
	I1001 20:45:40.614988   81817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I1001 20:45:40.615001   81817 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 20:45:40.730080   81817 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727815540.708655463
	
	I1001 20:45:40.730155   81817 fix.go:216] guest clock: 1727815540.708655463
	I1001 20:45:40.730169   81817 fix.go:229] Guest: 2024-10-01 20:45:40.708655463 +0000 UTC Remote: 2024-10-01 20:45:40.610270363 +0000 UTC m=+35.581286208 (delta=98.3851ms)
	I1001 20:45:40.730196   81817 fix.go:200] guest clock delta is within tolerance: 98.3851ms
	I1001 20:45:40.730203   81817 start.go:83] releasing machines lock for "bridge-983557", held for 29.26448071s
	I1001 20:45:40.730230   81817 main.go:141] libmachine: (bridge-983557) Calling .DriverName
	I1001 20:45:40.730544   81817 main.go:141] libmachine: (bridge-983557) Calling .GetIP
	I1001 20:45:40.734207   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:40.734859   81817 main.go:141] libmachine: (bridge-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:66:cb", ip: ""} in network mk-bridge-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:45:27 +0000 UTC Type:0 Mac:52:54:00:f5:66:cb Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:bridge-983557 Clientid:01:52:54:00:f5:66:cb}
	I1001 20:45:40.734892   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined IP address 192.168.72.18 and MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:40.735233   81817 main.go:141] libmachine: (bridge-983557) Calling .DriverName
	I1001 20:45:40.735823   81817 main.go:141] libmachine: (bridge-983557) Calling .DriverName
	I1001 20:45:40.736245   81817 main.go:141] libmachine: (bridge-983557) Calling .DriverName
	I1001 20:45:40.736398   81817 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 20:45:40.736471   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHHostname
	I1001 20:45:40.736676   81817 ssh_runner.go:195] Run: cat /version.json
	I1001 20:45:40.736718   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHHostname
	I1001 20:45:40.740177   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:40.740565   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:40.740645   81817 main.go:141] libmachine: (bridge-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:66:cb", ip: ""} in network mk-bridge-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:45:27 +0000 UTC Type:0 Mac:52:54:00:f5:66:cb Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:bridge-983557 Clientid:01:52:54:00:f5:66:cb}
	I1001 20:45:40.740666   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined IP address 192.168.72.18 and MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:40.740907   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHPort
	I1001 20:45:40.741123   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHKeyPath
	I1001 20:45:40.741131   81817 main.go:141] libmachine: (bridge-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:66:cb", ip: ""} in network mk-bridge-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:45:27 +0000 UTC Type:0 Mac:52:54:00:f5:66:cb Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:bridge-983557 Clientid:01:52:54:00:f5:66:cb}
	I1001 20:45:40.741156   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined IP address 192.168.72.18 and MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:40.741309   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHUsername
	I1001 20:45:40.741545   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHPort
	I1001 20:45:40.741540   81817 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/bridge-983557/id_rsa Username:docker}
	I1001 20:45:40.741740   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHKeyPath
	I1001 20:45:40.741862   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHUsername
	I1001 20:45:40.741968   81817 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/bridge-983557/id_rsa Username:docker}
	I1001 20:45:40.862837   81817 ssh_runner.go:195] Run: systemctl --version
	I1001 20:45:40.869674   81817 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 20:45:41.038651   81817 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 20:45:41.045170   81817 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 20:45:41.045250   81817 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 20:45:41.063095   81817 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 20:45:41.063120   81817 start.go:495] detecting cgroup driver to use...
	I1001 20:45:41.063191   81817 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 20:45:41.082065   81817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 20:45:41.101021   81817 docker.go:217] disabling cri-docker service (if available) ...
	I1001 20:45:41.101089   81817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 20:45:41.117379   81817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 20:45:41.133787   81817 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 20:45:41.279920   81817 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 20:45:41.461520   81817 docker.go:233] disabling docker service ...
	I1001 20:45:41.461613   81817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 20:45:41.482131   81817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 20:45:41.500503   81817 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 20:45:41.686319   81817 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 20:45:41.812858   81817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 20:45:41.828561   81817 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 20:45:41.849484   81817 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 20:45:41.849564   81817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:45:41.861445   81817 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 20:45:41.861532   81817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:45:41.872816   81817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:45:41.884053   81817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:45:41.897137   81817 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 20:45:41.912801   81817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:45:41.925021   81817 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 20:45:41.944075   81817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
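
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, default sysctls). A small sketch of the same kind of in-place rewrite for just the pause_image key, done with Go's regexp instead of sed; the path and image tag are the ones in the log, everything else is illustrative and must run as root on a node that has the file:

```go
// Sketch only: rewrite the pause_image line in CRI-O's drop-in config,
// equivalent to the first sed command shown in the log.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	updated := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	if err := os.WriteFile(conf, updated, 0644); err != nil {
		log.Fatal(err)
	}
}
```
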
	I1001 20:45:41.955994   81817 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 20:45:41.966919   81817 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 20:45:41.966991   81817 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 20:45:41.980660   81817 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
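
The netfilter probe above fails until br_netfilter is loaded, after which IPv4 forwarding is switched on. A hedged sketch of that check-and-enable sequence, assuming it runs as root directly on the guest rather than through ssh_runner:

```go
// Hedged sketch of the netfilter/forwarding steps in the log: load
// br_netfilter if the bridge sysctl file is missing, then enable IPv4
// forwarding the same way the `echo 1 > ...` command does. Requires root.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const bridgeSysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"

	if _, err := os.Stat(bridgeSysctl); os.IsNotExist(err) {
		// Mirrors: sudo modprobe br_netfilter
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe br_netfilter failed: %v: %s", err, out)
		}
	}
	// Mirrors: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}
```
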
	I1001 20:45:41.991886   81817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:45:42.132522   81817 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 20:45:42.237972   81817 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 20:45:42.238053   81817 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 20:45:42.243363   81817 start.go:563] Will wait 60s for crictl version
	I1001 20:45:42.243413   81817 ssh_runner.go:195] Run: which crictl
	I1001 20:45:42.247811   81817 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 20:45:42.296981   81817 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 20:45:42.297072   81817 ssh_runner.go:195] Run: crio --version
	I1001 20:45:42.327017   81817 ssh_runner.go:195] Run: crio --version
	I1001 20:45:42.359379   81817 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 20:45:42.360499   81817 main.go:141] libmachine: (bridge-983557) Calling .GetIP
	I1001 20:45:42.363045   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:42.363443   81817 main.go:141] libmachine: (bridge-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:66:cb", ip: ""} in network mk-bridge-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:45:27 +0000 UTC Type:0 Mac:52:54:00:f5:66:cb Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:bridge-983557 Clientid:01:52:54:00:f5:66:cb}
	I1001 20:45:42.363470   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined IP address 192.168.72.18 and MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:45:42.363701   81817 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1001 20:45:42.368226   81817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 20:45:42.384625   81817 kubeadm.go:883] updating cluster {Name:bridge-983557 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-983557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.18 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 20:45:42.384780   81817 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 20:45:42.384847   81817 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:45:42.422371   81817 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1001 20:45:42.422437   81817 ssh_runner.go:195] Run: which lz4
	I1001 20:45:42.426493   81817 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 20:45:42.430639   81817 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 20:45:42.430683   81817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1001 20:45:43.746286   81817 crio.go:462] duration metric: took 1.319684677s to copy over tarball
	I1001 20:45:43.746374   81817 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 20:45:41.545676   80210 node_ready.go:53] node "flannel-983557" has status "Ready":"False"
	I1001 20:45:42.585449   80210 node_ready.go:49] node "flannel-983557" has status "Ready":"True"
	I1001 20:45:42.585485   80210 node_ready.go:38] duration metric: took 7.545152387s for node "flannel-983557" to be "Ready" ...
	I1001 20:45:42.585496   80210 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:45:42.597927   80210 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-n5zfj" in "kube-system" namespace to be "Ready" ...
	I1001 20:45:44.606079   80210 pod_ready.go:103] pod "coredns-7c65d6cfc9-n5zfj" in "kube-system" namespace has status "Ready":"False"
	I1001 20:45:46.048219   81817 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.301809742s)
	I1001 20:45:46.048253   81817 crio.go:469] duration metric: took 2.301931936s to extract the tarball
	I1001 20:45:46.048262   81817 ssh_runner.go:146] rm: /preloaded.tar.lz4
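
The preload tarball is copied over, unpacked with tar's lz4 filter, and then removed. The extraction step could be reproduced by hand roughly like this; the flags and paths are the ones in the log, and sudo, tar, and lz4 must be present on the machine, so treat it as an illustration rather than minikube's code:

```go
// Rough illustration of the extraction command the ssh_runner executes above:
// unpack the lz4-compressed preload tarball into /var with xattrs preserved.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
	// The log then removes /preloaded.tar.lz4 once extraction succeeds.
}
```
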
	I1001 20:45:46.092283   81817 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:45:46.143261   81817 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 20:45:46.143287   81817 cache_images.go:84] Images are preloaded, skipping loading
	I1001 20:45:46.143294   81817 kubeadm.go:934] updating node { 192.168.72.18 8443 v1.31.1 crio true true} ...
	I1001 20:45:46.143389   81817 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-983557 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:bridge-983557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1001 20:45:46.143448   81817 ssh_runner.go:195] Run: crio config
	I1001 20:45:46.196160   81817 cni.go:84] Creating CNI manager for "bridge"
	I1001 20:45:46.196185   81817 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 20:45:46.196205   81817 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.18 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-983557 NodeName:bridge-983557 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 20:45:46.196366   81817 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.18
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-983557"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.18
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.18"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 20:45:46.196433   81817 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 20:45:46.207826   81817 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 20:45:46.207892   81817 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 20:45:46.219031   81817 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1001 20:45:46.237615   81817 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 20:45:46.255911   81817 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
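
The kubeadm config rendered earlier is copied to /var/tmp/minikube/kubeadm.yaml.new on the node. A small sketch for inspecting that multi-document file; the gopkg.in/yaml.v3 dependency and the field selection are assumptions for illustration only:

```go
// Illustrative sketch: list the document kinds in the generated kubeadm
// config that was just copied to the node.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		// Expect InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration.
		fmt.Printf("%s\t%s\n", doc.Kind, doc.APIVersion)
	}
}
```
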
	I1001 20:45:46.274183   81817 ssh_runner.go:195] Run: grep 192.168.72.18	control-plane.minikube.internal$ /etc/hosts
	I1001 20:45:46.278408   81817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.18	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 20:45:46.291936   81817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:45:46.431412   81817 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:45:46.449625   81817 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557 for IP: 192.168.72.18
	I1001 20:45:46.449648   81817 certs.go:194] generating shared ca certs ...
	I1001 20:45:46.449671   81817 certs.go:226] acquiring lock for ca certs: {Name:mk4af52da2631512ebee071ecde5dd3fa47a582c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:45:46.449864   81817 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key
	I1001 20:45:46.449929   81817 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key
	I1001 20:45:46.449939   81817 certs.go:256] generating profile certs ...
	I1001 20:45:46.450011   81817 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557/client.key
	I1001 20:45:46.450041   81817 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557/client.crt with IP's: []
	I1001 20:45:46.767078   81817 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557/client.crt ...
	I1001 20:45:46.767137   81817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557/client.crt: {Name:mk0043fbed520efa672f37df419fa8d6a56222a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:45:46.767420   81817 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557/client.key ...
	I1001 20:45:46.767444   81817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557/client.key: {Name:mk0f1617b4511cb1e507996ec3aed5736817bd0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:45:46.767601   81817 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557/apiserver.key.1210f02c
	I1001 20:45:46.767635   81817 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557/apiserver.crt.1210f02c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.18]
	I1001 20:45:46.841025   81817 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557/apiserver.crt.1210f02c ...
	I1001 20:45:46.841134   81817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557/apiserver.crt.1210f02c: {Name:mk5422a0ce58c34d4e0cbb7afd86cd2e0acbdbf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:45:46.841399   81817 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557/apiserver.key.1210f02c ...
	I1001 20:45:46.841421   81817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557/apiserver.key.1210f02c: {Name:mkebc97d5f89e32b800cf23830d277c889e1d73a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:45:46.841517   81817 certs.go:381] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557/apiserver.crt.1210f02c -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557/apiserver.crt
	I1001 20:45:46.841589   81817 certs.go:385] copying /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557/apiserver.key.1210f02c -> /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557/apiserver.key
	I1001 20:45:46.841638   81817 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557/proxy-client.key
	I1001 20:45:46.841652   81817 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557/proxy-client.crt with IP's: []
	I1001 20:45:46.899058   81817 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557/proxy-client.crt ...
	I1001 20:45:46.899087   81817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557/proxy-client.crt: {Name:mk0f775e0b8276b24326e92f6811afe4b4a4bbfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:45:46.899302   81817 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557/proxy-client.key ...
	I1001 20:45:46.899315   81817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557/proxy-client.key: {Name:mkecfa4cbc28eb412f542d514441b9ea10870377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:45:46.899550   81817 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem (1338 bytes)
	W1001 20:45:46.899595   81817 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430_empty.pem, impossibly tiny 0 bytes
	I1001 20:45:46.899610   81817 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 20:45:46.899644   81817 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/ca.pem (1082 bytes)
	I1001 20:45:46.899680   81817 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/cert.pem (1123 bytes)
	I1001 20:45:46.899712   81817 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/certs/key.pem (1679 bytes)
	I1001 20:45:46.899763   81817 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem (1708 bytes)
	I1001 20:45:46.900428   81817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 20:45:46.929829   81817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 20:45:46.957385   81817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 20:45:46.988704   81817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 20:45:47.018776   81817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1001 20:45:47.047565   81817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 20:45:47.081359   81817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 20:45:47.107554   81817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/bridge-983557/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1001 20:45:47.143783   81817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 20:45:47.183137   81817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/certs/18430.pem --> /usr/share/ca-certificates/18430.pem (1338 bytes)
	I1001 20:45:47.215872   81817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/ssl/certs/184302.pem --> /usr/share/ca-certificates/184302.pem (1708 bytes)
	I1001 20:45:47.243641   81817 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 20:45:47.269441   81817 ssh_runner.go:195] Run: openssl version
	I1001 20:45:47.275944   81817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 20:45:47.287154   81817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:45:47.291769   81817 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:45:47.291826   81817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:45:47.297628   81817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 20:45:47.309094   81817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18430.pem && ln -fs /usr/share/ca-certificates/18430.pem /etc/ssl/certs/18430.pem"
	I1001 20:45:47.319884   81817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18430.pem
	I1001 20:45:47.324530   81817 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:13 /usr/share/ca-certificates/18430.pem
	I1001 20:45:47.324589   81817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18430.pem
	I1001 20:45:47.331623   81817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18430.pem /etc/ssl/certs/51391683.0"
	I1001 20:45:47.346220   81817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/184302.pem && ln -fs /usr/share/ca-certificates/184302.pem /etc/ssl/certs/184302.pem"
	I1001 20:45:47.358816   81817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/184302.pem
	I1001 20:45:47.363640   81817 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:13 /usr/share/ca-certificates/184302.pem
	I1001 20:45:47.363708   81817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/184302.pem
	I1001 20:45:47.370129   81817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/184302.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 20:45:47.381002   81817 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 20:45:47.385019   81817 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 20:45:47.385081   81817 kubeadm.go:392] StartCluster: {Name:bridge-983557 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-983557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.18 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:45:47.385165   81817 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 20:45:47.385221   81817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 20:45:47.424546   81817 cri.go:89] found id: ""
	I1001 20:45:47.424626   81817 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 20:45:47.434348   81817 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 20:45:47.443878   81817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:45:47.452573   81817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:45:47.452633   81817 kubeadm.go:157] found existing configuration files:
	
	I1001 20:45:47.452681   81817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 20:45:47.461395   81817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:45:47.461476   81817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:45:47.471368   81817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 20:45:47.480439   81817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:45:47.480513   81817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:45:47.490524   81817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 20:45:47.500075   81817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:45:47.500132   81817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:45:47.510118   81817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 20:45:47.519419   81817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:45:47.519495   81817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 20:45:47.529852   81817 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 20:45:47.585949   81817 kubeadm.go:310] W1001 20:45:47.571481     841 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 20:45:47.586834   81817 kubeadm.go:310] W1001 20:45:47.572542     841 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 20:45:47.693583   81817 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 20:45:46.606287   80210 pod_ready.go:103] pod "coredns-7c65d6cfc9-n5zfj" in "kube-system" namespace has status "Ready":"False"
	I1001 20:45:49.105970   80210 pod_ready.go:103] pod "coredns-7c65d6cfc9-n5zfj" in "kube-system" namespace has status "Ready":"False"
	I1001 20:45:51.604920   80210 pod_ready.go:103] pod "coredns-7c65d6cfc9-n5zfj" in "kube-system" namespace has status "Ready":"False"
	I1001 20:45:54.105024   80210 pod_ready.go:103] pod "coredns-7c65d6cfc9-n5zfj" in "kube-system" namespace has status "Ready":"False"
	I1001 20:45:58.781868   81817 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 20:45:58.781947   81817 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:45:58.782043   81817 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:45:58.782198   81817 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:45:58.782337   81817 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 20:45:58.782415   81817 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 20:45:58.783957   81817 out.go:235]   - Generating certificates and keys ...
	I1001 20:45:58.784055   81817 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:45:58.784166   81817 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:45:58.784279   81817 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1001 20:45:58.784402   81817 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1001 20:45:58.784494   81817 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1001 20:45:58.784573   81817 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1001 20:45:58.784663   81817 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1001 20:45:58.784844   81817 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-983557 localhost] and IPs [192.168.72.18 127.0.0.1 ::1]
	I1001 20:45:58.784948   81817 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1001 20:45:58.785095   81817 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-983557 localhost] and IPs [192.168.72.18 127.0.0.1 ::1]
	I1001 20:45:58.785180   81817 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1001 20:45:58.785276   81817 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1001 20:45:58.785336   81817 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1001 20:45:58.785415   81817 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:45:58.785492   81817 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:45:58.785573   81817 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 20:45:58.785641   81817 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:45:58.785722   81817 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:45:58.785813   81817 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:45:58.785919   81817 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:45:58.786019   81817 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:45:58.787452   81817 out.go:235]   - Booting up control plane ...
	I1001 20:45:58.787556   81817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:45:58.787644   81817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:45:58.787731   81817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:45:58.787828   81817 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:45:58.787929   81817 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:45:58.787991   81817 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:45:58.788145   81817 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 20:45:58.788257   81817 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 20:45:58.788310   81817 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.393606ms
	I1001 20:45:58.788397   81817 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1001 20:45:58.788482   81817 kubeadm.go:310] [api-check] The API server is healthy after 6.003248913s
	I1001 20:45:58.788643   81817 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 20:45:58.788863   81817 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 20:45:58.788954   81817 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 20:45:58.789148   81817 kubeadm.go:310] [mark-control-plane] Marking the node bridge-983557 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 20:45:58.789212   81817 kubeadm.go:310] [bootstrap-token] Using token: jdrqgv.xakkhl8vh8405wbm
	I1001 20:45:58.790525   81817 out.go:235]   - Configuring RBAC rules ...
	I1001 20:45:58.790671   81817 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 20:45:58.790802   81817 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 20:45:58.791010   81817 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 20:45:58.791174   81817 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 20:45:58.791329   81817 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 20:45:58.791443   81817 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 20:45:58.791598   81817 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 20:45:58.791676   81817 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 20:45:58.791745   81817 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 20:45:58.791755   81817 kubeadm.go:310] 
	I1001 20:45:58.791850   81817 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 20:45:58.791878   81817 kubeadm.go:310] 
	I1001 20:45:58.792019   81817 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 20:45:58.792034   81817 kubeadm.go:310] 
	I1001 20:45:58.792066   81817 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 20:45:58.792145   81817 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 20:45:58.792213   81817 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 20:45:58.792222   81817 kubeadm.go:310] 
	I1001 20:45:58.792293   81817 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 20:45:58.792304   81817 kubeadm.go:310] 
	I1001 20:45:58.792380   81817 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 20:45:58.792390   81817 kubeadm.go:310] 
	I1001 20:45:58.792467   81817 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 20:45:58.792568   81817 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 20:45:58.792654   81817 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 20:45:58.792665   81817 kubeadm.go:310] 
	I1001 20:45:58.792769   81817 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 20:45:58.792872   81817 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 20:45:58.792883   81817 kubeadm.go:310] 
	I1001 20:45:58.792993   81817 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jdrqgv.xakkhl8vh8405wbm \
	I1001 20:45:58.793133   81817 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 \
	I1001 20:45:58.793153   81817 kubeadm.go:310] 	--control-plane 
	I1001 20:45:58.793159   81817 kubeadm.go:310] 
	I1001 20:45:58.793228   81817 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 20:45:58.793234   81817 kubeadm.go:310] 
	I1001 20:45:58.793315   81817 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jdrqgv.xakkhl8vh8405wbm \
	I1001 20:45:58.793412   81817 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c56edecb7f9eb74bee0ee499587f93e0ec9eb8fa5d860b84325aacd75ec55c1 
	I1001 20:45:58.793426   81817 cni.go:84] Creating CNI manager for "bridge"
	I1001 20:45:58.794748   81817 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 20:45:58.795841   81817 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 20:45:58.806051   81817 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1001 20:45:58.824925   81817 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 20:45:58.825001   81817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:45:58.825009   81817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-983557 minikube.k8s.io/updated_at=2024_10_01T20_45_58_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=bridge-983557 minikube.k8s.io/primary=true
	I1001 20:45:58.858723   81817 ops.go:34] apiserver oom_adj: -16
	I1001 20:45:58.972021   81817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:45:59.472705   81817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:45:59.972814   81817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:45:56.604741   80210 pod_ready.go:103] pod "coredns-7c65d6cfc9-n5zfj" in "kube-system" namespace has status "Ready":"False"
	I1001 20:45:58.605181   80210 pod_ready.go:103] pod "coredns-7c65d6cfc9-n5zfj" in "kube-system" namespace has status "Ready":"False"
	I1001 20:45:59.104223   80210 pod_ready.go:93] pod "coredns-7c65d6cfc9-n5zfj" in "kube-system" namespace has status "Ready":"True"
	I1001 20:45:59.104252   80210 pod_ready.go:82] duration metric: took 16.506296715s for pod "coredns-7c65d6cfc9-n5zfj" in "kube-system" namespace to be "Ready" ...
	I1001 20:45:59.104261   80210 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-983557" in "kube-system" namespace to be "Ready" ...
	I1001 20:45:59.108879   80210 pod_ready.go:93] pod "etcd-flannel-983557" in "kube-system" namespace has status "Ready":"True"
	I1001 20:45:59.108900   80210 pod_ready.go:82] duration metric: took 4.633113ms for pod "etcd-flannel-983557" in "kube-system" namespace to be "Ready" ...
	I1001 20:45:59.108908   80210 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-983557" in "kube-system" namespace to be "Ready" ...
	I1001 20:45:59.112823   80210 pod_ready.go:93] pod "kube-apiserver-flannel-983557" in "kube-system" namespace has status "Ready":"True"
	I1001 20:45:59.112843   80210 pod_ready.go:82] duration metric: took 3.928767ms for pod "kube-apiserver-flannel-983557" in "kube-system" namespace to be "Ready" ...
	I1001 20:45:59.112851   80210 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-983557" in "kube-system" namespace to be "Ready" ...
	I1001 20:45:59.117313   80210 pod_ready.go:93] pod "kube-controller-manager-flannel-983557" in "kube-system" namespace has status "Ready":"True"
	I1001 20:45:59.117337   80210 pod_ready.go:82] duration metric: took 4.478619ms for pod "kube-controller-manager-flannel-983557" in "kube-system" namespace to be "Ready" ...
	I1001 20:45:59.117348   80210 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-sv8kf" in "kube-system" namespace to be "Ready" ...
	I1001 20:45:59.127354   80210 pod_ready.go:93] pod "kube-proxy-sv8kf" in "kube-system" namespace has status "Ready":"True"
	I1001 20:45:59.127382   80210 pod_ready.go:82] duration metric: took 10.025925ms for pod "kube-proxy-sv8kf" in "kube-system" namespace to be "Ready" ...
	I1001 20:45:59.127391   80210 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-983557" in "kube-system" namespace to be "Ready" ...
	I1001 20:45:59.501839   80210 pod_ready.go:93] pod "kube-scheduler-flannel-983557" in "kube-system" namespace has status "Ready":"True"
	I1001 20:45:59.501867   80210 pod_ready.go:82] duration metric: took 374.470504ms for pod "kube-scheduler-flannel-983557" in "kube-system" namespace to be "Ready" ...
	I1001 20:45:59.501880   80210 pod_ready.go:39] duration metric: took 16.916368742s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:45:59.501898   80210 api_server.go:52] waiting for apiserver process to appear ...
	I1001 20:45:59.501955   80210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:45:59.517321   80210 api_server.go:72] duration metric: took 25.258336017s to wait for apiserver process to appear ...
	I1001 20:45:59.517350   80210 api_server.go:88] waiting for apiserver healthz status ...
	I1001 20:45:59.517374   80210 api_server.go:253] Checking apiserver healthz at https://192.168.39.251:8443/healthz ...
	I1001 20:45:59.524637   80210 api_server.go:279] https://192.168.39.251:8443/healthz returned 200:
	ok
	I1001 20:45:59.526595   80210 api_server.go:141] control plane version: v1.31.1
	I1001 20:45:59.526625   80210 api_server.go:131] duration metric: took 9.266717ms to wait for apiserver health ...
	I1001 20:45:59.526636   80210 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 20:45:59.704780   80210 system_pods.go:59] 7 kube-system pods found
	I1001 20:45:59.704826   80210 system_pods.go:61] "coredns-7c65d6cfc9-n5zfj" [ac3deb3c-4754-4d72-a0e0-fe9739fcf9e1] Running
	I1001 20:45:59.704837   80210 system_pods.go:61] "etcd-flannel-983557" [73006293-05d0-497e-b3e4-78179346ee87] Running
	I1001 20:45:59.704842   80210 system_pods.go:61] "kube-apiserver-flannel-983557" [514471ac-8868-492f-bea8-339fd37ddbc5] Running
	I1001 20:45:59.704848   80210 system_pods.go:61] "kube-controller-manager-flannel-983557" [5dec96b3-f747-43eb-90bf-b96d63691bc6] Running
	I1001 20:45:59.704854   80210 system_pods.go:61] "kube-proxy-sv8kf" [b69b9277-c19f-415b-8e01-f3d39045b9bb] Running
	I1001 20:45:59.704859   80210 system_pods.go:61] "kube-scheduler-flannel-983557" [cadb9e0d-78f6-4ce0-941b-171b55876266] Running
	I1001 20:45:59.704863   80210 system_pods.go:61] "storage-provisioner" [b5119562-5b8e-4b03-bc90-aa6f2d75f436] Running
	I1001 20:45:59.704870   80210 system_pods.go:74] duration metric: took 178.227825ms to wait for pod list to return data ...
	I1001 20:45:59.704878   80210 default_sa.go:34] waiting for default service account to be created ...
	I1001 20:45:59.901147   80210 default_sa.go:45] found service account: "default"
	I1001 20:45:59.901172   80210 default_sa.go:55] duration metric: took 196.287436ms for default service account to be created ...
	I1001 20:45:59.901181   80210 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 20:46:00.104007   80210 system_pods.go:86] 7 kube-system pods found
	I1001 20:46:00.104034   80210 system_pods.go:89] "coredns-7c65d6cfc9-n5zfj" [ac3deb3c-4754-4d72-a0e0-fe9739fcf9e1] Running
	I1001 20:46:00.104041   80210 system_pods.go:89] "etcd-flannel-983557" [73006293-05d0-497e-b3e4-78179346ee87] Running
	I1001 20:46:00.104045   80210 system_pods.go:89] "kube-apiserver-flannel-983557" [514471ac-8868-492f-bea8-339fd37ddbc5] Running
	I1001 20:46:00.104048   80210 system_pods.go:89] "kube-controller-manager-flannel-983557" [5dec96b3-f747-43eb-90bf-b96d63691bc6] Running
	I1001 20:46:00.104058   80210 system_pods.go:89] "kube-proxy-sv8kf" [b69b9277-c19f-415b-8e01-f3d39045b9bb] Running
	I1001 20:46:00.104063   80210 system_pods.go:89] "kube-scheduler-flannel-983557" [cadb9e0d-78f6-4ce0-941b-171b55876266] Running
	I1001 20:46:00.104067   80210 system_pods.go:89] "storage-provisioner" [b5119562-5b8e-4b03-bc90-aa6f2d75f436] Running
	I1001 20:46:00.104075   80210 system_pods.go:126] duration metric: took 202.888087ms to wait for k8s-apps to be running ...
	I1001 20:46:00.104084   80210 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 20:46:00.104133   80210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:46:00.118761   80210 system_svc.go:56] duration metric: took 14.66802ms WaitForService to wait for kubelet
	I1001 20:46:00.118796   80210 kubeadm.go:582] duration metric: took 25.859817958s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:46:00.118818   80210 node_conditions.go:102] verifying NodePressure condition ...
	I1001 20:46:00.302763   80210 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 20:46:00.302793   80210 node_conditions.go:123] node cpu capacity is 2
	I1001 20:46:00.302808   80210 node_conditions.go:105] duration metric: took 183.984372ms to run NodePressure ...
	I1001 20:46:00.302821   80210 start.go:241] waiting for startup goroutines ...
	I1001 20:46:00.302831   80210 start.go:246] waiting for cluster config update ...
	I1001 20:46:00.302846   80210 start.go:255] writing updated cluster config ...
	I1001 20:46:00.303173   80210 ssh_runner.go:195] Run: rm -f paused
	I1001 20:46:00.353974   80210 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 20:46:00.355834   80210 out.go:177] * Done! kubectl is now configured to use "flannel-983557" cluster and "default" namespace by default
	I1001 20:46:00.472832   81817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:46:00.972125   81817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:46:01.473123   81817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:46:01.972139   81817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:46:02.472487   81817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:46:02.973044   81817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 20:46:03.107057   81817 kubeadm.go:1113] duration metric: took 4.282114539s to wait for elevateKubeSystemPrivileges
	I1001 20:46:03.107103   81817 kubeadm.go:394] duration metric: took 15.722023236s to StartCluster
	I1001 20:46:03.107125   81817 settings.go:142] acquiring lock: {Name:mkeb3fe64ed992373f048aef24eb0d675bcab60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:46:03.107212   81817 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:46:03.108443   81817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-11198/kubeconfig: {Name:mk7ef19d546928001abf2478a3abe1d17765a591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:46:03.108682   81817 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.18 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 20:46:03.108693   81817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1001 20:46:03.108713   81817 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 20:46:03.108866   81817 addons.go:69] Setting storage-provisioner=true in profile "bridge-983557"
	I1001 20:46:03.108884   81817 addons.go:234] Setting addon storage-provisioner=true in "bridge-983557"
	I1001 20:46:03.108898   81817 addons.go:69] Setting default-storageclass=true in profile "bridge-983557"
	I1001 20:46:03.108916   81817 host.go:66] Checking if "bridge-983557" exists ...
	I1001 20:46:03.108935   81817 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-983557"
	I1001 20:46:03.108906   81817 config.go:182] Loaded profile config "bridge-983557": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:46:03.109364   81817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:46:03.109403   81817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:46:03.109372   81817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:46:03.109452   81817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:46:03.110134   81817 out.go:177] * Verifying Kubernetes components...
	I1001 20:46:03.111424   81817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:46:03.131665   81817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36435
	I1001 20:46:03.131713   81817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38221
	I1001 20:46:03.132262   81817 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:46:03.132456   81817 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:46:03.132858   81817 main.go:141] libmachine: Using API Version  1
	I1001 20:46:03.132878   81817 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:46:03.133016   81817 main.go:141] libmachine: Using API Version  1
	I1001 20:46:03.133029   81817 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:46:03.133186   81817 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:46:03.133313   81817 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:46:03.133329   81817 main.go:141] libmachine: (bridge-983557) Calling .GetState
	I1001 20:46:03.133895   81817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:46:03.133920   81817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:46:03.137304   81817 addons.go:234] Setting addon default-storageclass=true in "bridge-983557"
	I1001 20:46:03.137348   81817 host.go:66] Checking if "bridge-983557" exists ...
	I1001 20:46:03.137724   81817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:46:03.137777   81817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:46:03.153516   81817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I1001 20:46:03.153985   81817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34397
	I1001 20:46:03.154024   81817 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:46:03.154346   81817 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:46:03.154890   81817 main.go:141] libmachine: Using API Version  1
	I1001 20:46:03.154911   81817 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:46:03.155083   81817 main.go:141] libmachine: Using API Version  1
	I1001 20:46:03.155106   81817 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:46:03.155316   81817 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:46:03.155462   81817 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:46:03.155602   81817 main.go:141] libmachine: (bridge-983557) Calling .GetState
	I1001 20:46:03.155944   81817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 20:46:03.155978   81817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 20:46:03.157346   81817 main.go:141] libmachine: (bridge-983557) Calling .DriverName
	I1001 20:46:03.159224   81817 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 20:46:03.160463   81817 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:46:03.160593   81817 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 20:46:03.160615   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHHostname
	I1001 20:46:03.163581   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:46:03.164204   81817 main.go:141] libmachine: (bridge-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:66:cb", ip: ""} in network mk-bridge-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:45:27 +0000 UTC Type:0 Mac:52:54:00:f5:66:cb Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:bridge-983557 Clientid:01:52:54:00:f5:66:cb}
	I1001 20:46:03.164232   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined IP address 192.168.72.18 and MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:46:03.164662   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHPort
	I1001 20:46:03.164937   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHKeyPath
	I1001 20:46:03.165162   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHUsername
	I1001 20:46:03.165421   81817 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/bridge-983557/id_rsa Username:docker}
	I1001 20:46:03.173296   81817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46491
	I1001 20:46:03.174009   81817 main.go:141] libmachine: () Calling .GetVersion
	I1001 20:46:03.174580   81817 main.go:141] libmachine: Using API Version  1
	I1001 20:46:03.174609   81817 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 20:46:03.175018   81817 main.go:141] libmachine: () Calling .GetMachineName
	I1001 20:46:03.175330   81817 main.go:141] libmachine: (bridge-983557) Calling .GetState
	I1001 20:46:03.177430   81817 main.go:141] libmachine: (bridge-983557) Calling .DriverName
	I1001 20:46:03.177636   81817 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 20:46:03.177653   81817 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 20:46:03.177673   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHHostname
	I1001 20:46:03.180752   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:46:03.181372   81817 main.go:141] libmachine: (bridge-983557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:66:cb", ip: ""} in network mk-bridge-983557: {Iface:virbr4 ExpiryTime:2024-10-01 21:45:27 +0000 UTC Type:0 Mac:52:54:00:f5:66:cb Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:bridge-983557 Clientid:01:52:54:00:f5:66:cb}
	I1001 20:46:03.181401   81817 main.go:141] libmachine: (bridge-983557) DBG | domain bridge-983557 has defined IP address 192.168.72.18 and MAC address 52:54:00:f5:66:cb in network mk-bridge-983557
	I1001 20:46:03.181631   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHPort
	I1001 20:46:03.181816   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHKeyPath
	I1001 20:46:03.182009   81817 main.go:141] libmachine: (bridge-983557) Calling .GetSSHUsername
	I1001 20:46:03.182159   81817 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/bridge-983557/id_rsa Username:docker}
	I1001 20:46:03.274216   81817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1001 20:46:03.305080   81817 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:46:03.385891   81817 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 20:46:03.436817   81817 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:46:03.664976   81817 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1001 20:46:03.665180   81817 main.go:141] libmachine: Making call to close driver server
	I1001 20:46:03.665202   81817 main.go:141] libmachine: (bridge-983557) Calling .Close
	I1001 20:46:03.665535   81817 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:46:03.665557   81817 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:46:03.665632   81817 main.go:141] libmachine: Making call to close driver server
	I1001 20:46:03.665679   81817 main.go:141] libmachine: (bridge-983557) Calling .Close
	I1001 20:46:03.665600   81817 main.go:141] libmachine: (bridge-983557) DBG | Closing plugin on server side
	I1001 20:46:03.666343   81817 node_ready.go:35] waiting up to 15m0s for node "bridge-983557" to be "Ready" ...
	I1001 20:46:03.666826   81817 main.go:141] libmachine: (bridge-983557) DBG | Closing plugin on server side
	I1001 20:46:03.666855   81817 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:46:03.666863   81817 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:46:03.691774   81817 node_ready.go:49] node "bridge-983557" has status "Ready":"True"
	I1001 20:46:03.691809   81817 node_ready.go:38] duration metric: took 25.440541ms for node "bridge-983557" to be "Ready" ...
	I1001 20:46:03.691822   81817 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:46:03.702226   81817 main.go:141] libmachine: Making call to close driver server
	I1001 20:46:03.702254   81817 main.go:141] libmachine: (bridge-983557) Calling .Close
	I1001 20:46:03.702602   81817 main.go:141] libmachine: (bridge-983557) DBG | Closing plugin on server side
	I1001 20:46:03.702651   81817 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:46:03.702664   81817 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:46:03.709787   81817 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-9np4g" in "kube-system" namespace to be "Ready" ...
	I1001 20:46:03.903808   81817 main.go:141] libmachine: Making call to close driver server
	I1001 20:46:03.903837   81817 main.go:141] libmachine: (bridge-983557) Calling .Close
	I1001 20:46:03.904150   81817 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:46:03.904209   81817 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:46:03.904228   81817 main.go:141] libmachine: (bridge-983557) DBG | Closing plugin on server side
	I1001 20:46:03.904231   81817 main.go:141] libmachine: Making call to close driver server
	I1001 20:46:03.904318   81817 main.go:141] libmachine: (bridge-983557) Calling .Close
	I1001 20:46:03.904603   81817 main.go:141] libmachine: Successfully made call to close driver server
	I1001 20:46:03.904625   81817 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 20:46:03.904678   81817 main.go:141] libmachine: (bridge-983557) DBG | Closing plugin on server side
	I1001 20:46:03.906150   81817 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1001 20:46:03.907251   81817 addons.go:510] duration metric: took 798.537543ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1001 20:46:04.169719   81817 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-983557" context rescaled to 1 replicas
	I1001 20:46:05.716106   81817 pod_ready.go:103] pod "coredns-7c65d6cfc9-9np4g" in "kube-system" namespace has status "Ready":"False"
	I1001 20:46:07.717598   81817 pod_ready.go:103] pod "coredns-7c65d6cfc9-9np4g" in "kube-system" namespace has status "Ready":"False"
	I1001 20:46:10.216891   81817 pod_ready.go:103] pod "coredns-7c65d6cfc9-9np4g" in "kube-system" namespace has status "Ready":"False"
	I1001 20:46:12.715901   81817 pod_ready.go:103] pod "coredns-7c65d6cfc9-9np4g" in "kube-system" namespace has status "Ready":"False"
	I1001 20:46:14.716687   81817 pod_ready.go:103] pod "coredns-7c65d6cfc9-9np4g" in "kube-system" namespace has status "Ready":"False"
	
	
	==> CRI-O <==
	Oct 01 20:46:19 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:46:19.453059062Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815579453036811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=44e741a7-8928-40e2-9e5e-2a6bdf1ef1a7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:46:19 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:46:19.453789867Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e49b3f75-eae4-41f3-b77f-4499ebe44dc3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:46:19 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:46:19.453843742Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e49b3f75-eae4-41f3-b77f-4499ebe44dc3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:46:19 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:46:19.454653645Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b53d014fc93fa0d3c13ceba3250b8c17ddc9ad02efc11dcbb47175016d6297ff,PodSandboxId:598750c0ae0cb93ab06050ea53cba530205abbf908fc993c5cb87d9894f374d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727814891812340675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfc9ed28-f04b-4e57-b8c0-f41849e1fc25,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13a5f7d3522ffc7d818e6263e8be652ef9699e1486880679368868a1f71b564,PodSandboxId:b8a21f346637326021ef7a70f5a232773987fef9f2da2efafce562f52367f6bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814890973191220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8xth8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a6d614d-f16c-46fb-add5-610ac5895e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e01d737bdedb55720ca53291e44205848456f41a907f0173e5922cfcb152f88,PodSandboxId:4b49d180746c05c865b79dc9b53c4701800e0e235b38bf0ffdf3bd16572799a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814890888660452,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-p7wbg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 13fab587-7dc4-41fc-a74c-47372725886d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f3179c90451f3bf47ed5365f8acfe350f4c4869367228a274bc9aed4b567625,PodSandboxId:856d0b0a067384ca0d19d20676b63ca60e34cf228e1862a9a0dca2cbf072ccfc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1727814889820907760,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-272ln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f2e367f-34c7-4117-bd8e-62b5aa58c7b5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78323440e4e9503b9fb29943c7128695c7518927053b3ad9b42b1aec8791a06d,PodSandboxId:4b842cf4ed836a35d4b86a43bd061253be7012c284075dd31a9a0043e8938f7a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172781487941409928
1,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ecc38e878fb93372a1105602ba5a781,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e865c2ca51f7ac9f6f501addebbe067f008a1aeafe5b80151686573c901539,PodSandboxId:6a1db95ce778961b95683aaab9840b45115917fd22329537f01b5f2bbed37413,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17278148794
14188852,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a542dd8aa2a552cd0f039e06a69c5b4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf139846ac2ddf91d7972ee2fe7b5419a6092ce8690a62daefdc19a587cae285,PodSandboxId:45c80871a3e1d84603784d76845977b542b603e2af717f989c7245339a96ef0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17278
14879363633239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c78d46056165d65e06340ab745db5b2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d00b2f009a8ed9caf9c147fe463b4f73e62fcd28260bd2c467e4593a67500fe4,PodSandboxId:fc682544086ad0e29f344297aa932f46f46dfb8be0e8db6bc3d655123c4bf4d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727814879353630104,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1a8ab2d4c77a09951889ae8c20de084,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ba7369fdd09ffc169c1a57256c1a30ba40cdfc2d480833758b899fda456d1f,PodSandboxId:b12c9d753b6065d694f81837cbd796620f4501e6cf16b45ab2f59e0b5dbbc3b0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727814593018325051,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ecc38e878fb93372a1105602ba5a781,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e49b3f75-eae4-41f3-b77f-4499ebe44dc3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:46:19 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:46:19.497335217Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=53b1065b-6b71-4c45-a851-16e65bfe97c9 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:46:19 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:46:19.497410177Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=53b1065b-6b71-4c45-a851-16e65bfe97c9 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:46:19 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:46:19.498631903Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=15ca297b-a9b1-4df0-a58a-dd45718c2ada name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:46:19 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:46:19.499025759Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815579499003560,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15ca297b-a9b1-4df0-a58a-dd45718c2ada name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:46:19 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:46:19.500143042Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11ab8c84-2967-4669-9caa-a24d3edebbf0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:46:19 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:46:19.500199911Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=11ab8c84-2967-4669-9caa-a24d3edebbf0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:46:19 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:46:19.500455586Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b53d014fc93fa0d3c13ceba3250b8c17ddc9ad02efc11dcbb47175016d6297ff,PodSandboxId:598750c0ae0cb93ab06050ea53cba530205abbf908fc993c5cb87d9894f374d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727814891812340675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfc9ed28-f04b-4e57-b8c0-f41849e1fc25,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13a5f7d3522ffc7d818e6263e8be652ef9699e1486880679368868a1f71b564,PodSandboxId:b8a21f346637326021ef7a70f5a232773987fef9f2da2efafce562f52367f6bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814890973191220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8xth8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a6d614d-f16c-46fb-add5-610ac5895e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e01d737bdedb55720ca53291e44205848456f41a907f0173e5922cfcb152f88,PodSandboxId:4b49d180746c05c865b79dc9b53c4701800e0e235b38bf0ffdf3bd16572799a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814890888660452,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-p7wbg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 13fab587-7dc4-41fc-a74c-47372725886d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f3179c90451f3bf47ed5365f8acfe350f4c4869367228a274bc9aed4b567625,PodSandboxId:856d0b0a067384ca0d19d20676b63ca60e34cf228e1862a9a0dca2cbf072ccfc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1727814889820907760,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-272ln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f2e367f-34c7-4117-bd8e-62b5aa58c7b5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78323440e4e9503b9fb29943c7128695c7518927053b3ad9b42b1aec8791a06d,PodSandboxId:4b842cf4ed836a35d4b86a43bd061253be7012c284075dd31a9a0043e8938f7a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172781487941409928
1,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ecc38e878fb93372a1105602ba5a781,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e865c2ca51f7ac9f6f501addebbe067f008a1aeafe5b80151686573c901539,PodSandboxId:6a1db95ce778961b95683aaab9840b45115917fd22329537f01b5f2bbed37413,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17278148794
14188852,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a542dd8aa2a552cd0f039e06a69c5b4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf139846ac2ddf91d7972ee2fe7b5419a6092ce8690a62daefdc19a587cae285,PodSandboxId:45c80871a3e1d84603784d76845977b542b603e2af717f989c7245339a96ef0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17278
14879363633239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c78d46056165d65e06340ab745db5b2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d00b2f009a8ed9caf9c147fe463b4f73e62fcd28260bd2c467e4593a67500fe4,PodSandboxId:fc682544086ad0e29f344297aa932f46f46dfb8be0e8db6bc3d655123c4bf4d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727814879353630104,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1a8ab2d4c77a09951889ae8c20de084,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ba7369fdd09ffc169c1a57256c1a30ba40cdfc2d480833758b899fda456d1f,PodSandboxId:b12c9d753b6065d694f81837cbd796620f4501e6cf16b45ab2f59e0b5dbbc3b0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727814593018325051,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ecc38e878fb93372a1105602ba5a781,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=11ab8c84-2967-4669-9caa-a24d3edebbf0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:46:19 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:46:19.537046116Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e4468728-627d-48d2-a66e-3eab02589094 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:46:19 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:46:19.537137718Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e4468728-627d-48d2-a66e-3eab02589094 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:46:19 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:46:19.538479371Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1ac1b643-a28f-4f24-87cd-f9875afb8bfe name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:46:19 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:46:19.538877862Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815579538856154,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ac1b643-a28f-4f24-87cd-f9875afb8bfe name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:46:19 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:46:19.539416423Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3337edb9-6545-4345-9ae2-90693bd1c877 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:46:19 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:46:19.539488263Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3337edb9-6545-4345-9ae2-90693bd1c877 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:46:19 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:46:19.539695557Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b53d014fc93fa0d3c13ceba3250b8c17ddc9ad02efc11dcbb47175016d6297ff,PodSandboxId:598750c0ae0cb93ab06050ea53cba530205abbf908fc993c5cb87d9894f374d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727814891812340675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfc9ed28-f04b-4e57-b8c0-f41849e1fc25,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13a5f7d3522ffc7d818e6263e8be652ef9699e1486880679368868a1f71b564,PodSandboxId:b8a21f346637326021ef7a70f5a232773987fef9f2da2efafce562f52367f6bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814890973191220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8xth8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a6d614d-f16c-46fb-add5-610ac5895e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e01d737bdedb55720ca53291e44205848456f41a907f0173e5922cfcb152f88,PodSandboxId:4b49d180746c05c865b79dc9b53c4701800e0e235b38bf0ffdf3bd16572799a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814890888660452,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-p7wbg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 13fab587-7dc4-41fc-a74c-47372725886d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f3179c90451f3bf47ed5365f8acfe350f4c4869367228a274bc9aed4b567625,PodSandboxId:856d0b0a067384ca0d19d20676b63ca60e34cf228e1862a9a0dca2cbf072ccfc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1727814889820907760,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-272ln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f2e367f-34c7-4117-bd8e-62b5aa58c7b5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78323440e4e9503b9fb29943c7128695c7518927053b3ad9b42b1aec8791a06d,PodSandboxId:4b842cf4ed836a35d4b86a43bd061253be7012c284075dd31a9a0043e8938f7a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172781487941409928
1,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ecc38e878fb93372a1105602ba5a781,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e865c2ca51f7ac9f6f501addebbe067f008a1aeafe5b80151686573c901539,PodSandboxId:6a1db95ce778961b95683aaab9840b45115917fd22329537f01b5f2bbed37413,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17278148794
14188852,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a542dd8aa2a552cd0f039e06a69c5b4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf139846ac2ddf91d7972ee2fe7b5419a6092ce8690a62daefdc19a587cae285,PodSandboxId:45c80871a3e1d84603784d76845977b542b603e2af717f989c7245339a96ef0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17278
14879363633239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c78d46056165d65e06340ab745db5b2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d00b2f009a8ed9caf9c147fe463b4f73e62fcd28260bd2c467e4593a67500fe4,PodSandboxId:fc682544086ad0e29f344297aa932f46f46dfb8be0e8db6bc3d655123c4bf4d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727814879353630104,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1a8ab2d4c77a09951889ae8c20de084,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ba7369fdd09ffc169c1a57256c1a30ba40cdfc2d480833758b899fda456d1f,PodSandboxId:b12c9d753b6065d694f81837cbd796620f4501e6cf16b45ab2f59e0b5dbbc3b0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727814593018325051,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ecc38e878fb93372a1105602ba5a781,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3337edb9-6545-4345-9ae2-90693bd1c877 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:46:19 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:46:19.571844932Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e42ef3c9-4fac-4027-ae3d-bc8fc5ceafa6 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:46:19 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:46:19.571962085Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e42ef3c9-4fac-4027-ae3d-bc8fc5ceafa6 name=/runtime.v1.RuntimeService/Version
	Oct 01 20:46:19 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:46:19.573829081Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=82d3d20d-bc03-434e-9353-08926f037bca name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:46:19 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:46:19.574216659Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815579574195807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82d3d20d-bc03-434e-9353-08926f037bca name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 20:46:19 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:46:19.574791983Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=100d2bad-6b7f-45b1-a4fb-1efceb6a6d82 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:46:19 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:46:19.574852960Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=100d2bad-6b7f-45b1-a4fb-1efceb6a6d82 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 20:46:19 default-k8s-diff-port-878552 crio[714]: time="2024-10-01 20:46:19.575051170Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b53d014fc93fa0d3c13ceba3250b8c17ddc9ad02efc11dcbb47175016d6297ff,PodSandboxId:598750c0ae0cb93ab06050ea53cba530205abbf908fc993c5cb87d9894f374d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727814891812340675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfc9ed28-f04b-4e57-b8c0-f41849e1fc25,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13a5f7d3522ffc7d818e6263e8be652ef9699e1486880679368868a1f71b564,PodSandboxId:b8a21f346637326021ef7a70f5a232773987fef9f2da2efafce562f52367f6bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814890973191220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8xth8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a6d614d-f16c-46fb-add5-610ac5895e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e01d737bdedb55720ca53291e44205848456f41a907f0173e5922cfcb152f88,PodSandboxId:4b49d180746c05c865b79dc9b53c4701800e0e235b38bf0ffdf3bd16572799a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727814890888660452,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-p7wbg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 13fab587-7dc4-41fc-a74c-47372725886d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f3179c90451f3bf47ed5365f8acfe350f4c4869367228a274bc9aed4b567625,PodSandboxId:856d0b0a067384ca0d19d20676b63ca60e34cf228e1862a9a0dca2cbf072ccfc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1727814889820907760,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-272ln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f2e367f-34c7-4117-bd8e-62b5aa58c7b5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78323440e4e9503b9fb29943c7128695c7518927053b3ad9b42b1aec8791a06d,PodSandboxId:4b842cf4ed836a35d4b86a43bd061253be7012c284075dd31a9a0043e8938f7a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172781487941409928
1,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ecc38e878fb93372a1105602ba5a781,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e865c2ca51f7ac9f6f501addebbe067f008a1aeafe5b80151686573c901539,PodSandboxId:6a1db95ce778961b95683aaab9840b45115917fd22329537f01b5f2bbed37413,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17278148794
14188852,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a542dd8aa2a552cd0f039e06a69c5b4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf139846ac2ddf91d7972ee2fe7b5419a6092ce8690a62daefdc19a587cae285,PodSandboxId:45c80871a3e1d84603784d76845977b542b603e2af717f989c7245339a96ef0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17278
14879363633239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c78d46056165d65e06340ab745db5b2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d00b2f009a8ed9caf9c147fe463b4f73e62fcd28260bd2c467e4593a67500fe4,PodSandboxId:fc682544086ad0e29f344297aa932f46f46dfb8be0e8db6bc3d655123c4bf4d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727814879353630104,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1a8ab2d4c77a09951889ae8c20de084,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ba7369fdd09ffc169c1a57256c1a30ba40cdfc2d480833758b899fda456d1f,PodSandboxId:b12c9d753b6065d694f81837cbd796620f4501e6cf16b45ab2f59e0b5dbbc3b0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727814593018325051,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-878552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ecc38e878fb93372a1105602ba5a781,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=100d2bad-6b7f-45b1-a4fb-1efceb6a6d82 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b53d014fc93fa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   11 minutes ago      Running             storage-provisioner       0                   598750c0ae0cb       storage-provisioner
	b13a5f7d3522f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   11 minutes ago      Running             coredns                   0                   b8a21f3466373       coredns-7c65d6cfc9-8xth8
	7e01d737bdedb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   11 minutes ago      Running             coredns                   0                   4b49d180746c0       coredns-7c65d6cfc9-p7wbg
	5f3179c90451f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   11 minutes ago      Running             kube-proxy                0                   856d0b0a06738       kube-proxy-272ln
	e9e865c2ca51f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   11 minutes ago      Running             kube-controller-manager   2                   6a1db95ce7789       kube-controller-manager-default-k8s-diff-port-878552
	78323440e4e95       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   11 minutes ago      Running             kube-apiserver            2                   4b842cf4ed836       kube-apiserver-default-k8s-diff-port-878552
	cf139846ac2dd       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   11 minutes ago      Running             etcd                      2                   45c80871a3e1d       etcd-default-k8s-diff-port-878552
	d00b2f009a8ed       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   11 minutes ago      Running             kube-scheduler            2                   fc682544086ad       kube-scheduler-default-k8s-diff-port-878552
	90ba7369fdd09       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   16 minutes ago      Exited              kube-apiserver            1                   b12c9d753b606       kube-apiserver-default-k8s-diff-port-878552
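Note: the container status table above is minikube's summary of the CRI state at collection time, and it matches the ListContainers responses in the crio journal (no metrics-server container is present). To reproduce the same view against the live node, the list can be pulled straight from CRI-O with crictl; this is a sketch and assumes the profile is still running and that crictl is on the guest's PATH, as it normally is in the minikube image:

	# list all containers, including exited ones, as CRI-O sees them
	minikube -p default-k8s-diff-port-878552 ssh -- sudo crictl ps -a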
	
	
	==> coredns [7e01d737bdedb55720ca53291e44205848456f41a907f0173e5922cfcb152f88] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [b13a5f7d3522ffc7d818e6263e8be652ef9699e1486880679368868a1f71b564] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-878552
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-878552
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=default-k8s-diff-port-878552
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T20_34_45_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 20:34:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-878552
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 20:46:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 20:45:06 +0000   Tue, 01 Oct 2024 20:34:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 20:45:06 +0000   Tue, 01 Oct 2024 20:34:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 20:45:06 +0000   Tue, 01 Oct 2024 20:34:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 20:45:06 +0000   Tue, 01 Oct 2024 20:34:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.4
	  Hostname:    default-k8s-diff-port-878552
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 16251e0cf7b04633be33e6ffa535a6a6
	  System UUID:                16251e0c-f7b0-4633-be33-e6ffa535a6a6
	  Boot ID:                    d0f8220a-f43b-4b0a-8271-fa5e5ab0d62f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-8xth8                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 coredns-7c65d6cfc9-p7wbg                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 etcd-default-k8s-diff-port-878552                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kube-apiserver-default-k8s-diff-port-878552             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-878552    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-272ln                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-default-k8s-diff-port-878552             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-6867b74b74-75m4s                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         11m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 11m   kube-proxy       
	  Normal  Starting                 11m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m   kubelet          Node default-k8s-diff-port-878552 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m   kubelet          Node default-k8s-diff-port-878552 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m   kubelet          Node default-k8s-diff-port-878552 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m   node-controller  Node default-k8s-diff-port-878552 event: Registered Node default-k8s-diff-port-878552 in Controller
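Note: one detail worth pulling out of the describe output is that metrics-server-6867b74b74-75m4s has been scheduled for 11m yet never appears as a running container in the status table above. A follow-up check could look like the sketch below; the kubeconfig context name is assumed to match the profile, and k8s-app=metrics-server is the label the addon usually carries, both assumptions here:

	kubectl --context default-k8s-diff-port-878552 -n kube-system describe pod -l k8s-app=metrics-server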
	
	
	==> dmesg <==
	[  +0.053617] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039786] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.884480] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.886563] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.466562] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.378007] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.067603] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.080377] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.196728] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.123874] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.304203] systemd-fstab-generator[703]: Ignoring "noauto" option for root device
	[  +3.984537] systemd-fstab-generator[795]: Ignoring "noauto" option for root device
	[  +2.164540] systemd-fstab-generator[914]: Ignoring "noauto" option for root device
	[  +0.064099] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.534744] kauditd_printk_skb: 69 callbacks suppressed
	[Oct 1 20:30] kauditd_printk_skb: 85 callbacks suppressed
	[Oct 1 20:34] systemd-fstab-generator[2567]: Ignoring "noauto" option for root device
	[  +0.063611] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.485904] systemd-fstab-generator[2884]: Ignoring "noauto" option for root device
	[  +0.080290] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.075473] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.323939] systemd-fstab-generator[3072]: Ignoring "noauto" option for root device
	[  +4.679912] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [cf139846ac2ddf91d7972ee2fe7b5419a6092ce8690a62daefdc19a587cae285] <==
	{"level":"warn","ts":"2024-10-01T20:44:29.541515Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T20:44:29.238481Z","time spent":"302.954974ms","remote":"127.0.0.1:40630","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":600,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-878552\" mod_revision:912 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-878552\" value_size:531 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-878552\" > >"}
	{"level":"warn","ts":"2024-10-01T20:44:29.935923Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.766263ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T20:44:29.935997Z","caller":"traceutil/trace.go:171","msg":"trace[232459337] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:920; }","duration":"225.859795ms","start":"2024-10-01T20:44:29.710128Z","end":"2024-10-01T20:44:29.935988Z","steps":["trace[232459337] 'range keys from in-memory index tree'  (duration: 225.706675ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T20:44:29.935922Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"216.354702ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T20:44:29.936216Z","caller":"traceutil/trace.go:171","msg":"trace[581433171] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:920; }","duration":"216.65998ms","start":"2024-10-01T20:44:29.719544Z","end":"2024-10-01T20:44:29.936204Z","steps":["trace[581433171] 'range keys from in-memory index tree'  (duration: 216.27062ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T20:44:40.555045Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":687}
	{"level":"info","ts":"2024-10-01T20:44:40.567461Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":687,"took":"12.05242ms","hash":937600412,"current-db-size-bytes":2297856,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2297856,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-10-01T20:44:40.567530Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":937600412,"revision":687,"compact-revision":-1}
	{"level":"warn","ts":"2024-10-01T20:45:19.544762Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15059353574629307833,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-10-01T20:45:19.546809Z","caller":"traceutil/trace.go:171","msg":"trace[1297346833] linearizableReadLoop","detail":"{readStateIndex:1105; appliedIndex:1104; }","duration":"502.414946ms","start":"2024-10-01T20:45:19.044364Z","end":"2024-10-01T20:45:19.546779Z","steps":["trace[1297346833] 'read index received'  (duration: 502.2374ms)","trace[1297346833] 'applied index is now lower than readState.Index'  (duration: 177.116µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-01T20:45:19.547116Z","caller":"traceutil/trace.go:171","msg":"trace[535682878] transaction","detail":"{read_only:false; response_revision:963; number_of_response:1; }","duration":"505.498077ms","start":"2024-10-01T20:45:19.041606Z","end":"2024-10-01T20:45:19.547104Z","steps":["trace[535682878] 'process raft request'  (duration: 505.072963ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T20:45:19.547997Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T20:45:19.041574Z","time spent":"506.137947ms","remote":"127.0.0.1:40542","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:962 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-01T20:45:19.824207Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.399971ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15059353574629307836 > lease_revoke:<id:50fd9249cac9f558>","response":"size:27"}
	{"level":"info","ts":"2024-10-01T20:45:19.824363Z","caller":"traceutil/trace.go:171","msg":"trace[708064122] linearizableReadLoop","detail":"{readStateIndex:1106; appliedIndex:1105; }","duration":"116.167602ms","start":"2024-10-01T20:45:19.708180Z","end":"2024-10-01T20:45:19.824348Z","steps":["trace[708064122] 'read index received'  (duration: 38.581µs)","trace[708064122] 'applied index is now lower than readState.Index'  (duration: 116.127628ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-01T20:45:19.824615Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.418423ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T20:45:19.824662Z","caller":"traceutil/trace.go:171","msg":"trace[1682122059] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:963; }","duration":"116.477933ms","start":"2024-10-01T20:45:19.708175Z","end":"2024-10-01T20:45:19.824653Z","steps":["trace[1682122059] 'agreement among raft nodes before linearized reading'  (duration: 116.395768ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T20:45:19.824852Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.436005ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T20:45:19.824888Z","caller":"traceutil/trace.go:171","msg":"trace[1872700423] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:963; }","duration":"105.473511ms","start":"2024-10-01T20:45:19.719409Z","end":"2024-10-01T20:45:19.824882Z","steps":["trace[1872700423] 'agreement among raft nodes before linearized reading'  (duration: 105.414678ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T20:45:45.864520Z","caller":"traceutil/trace.go:171","msg":"trace[889256030] linearizableReadLoop","detail":"{readStateIndex:1132; appliedIndex:1131; }","duration":"171.507457ms","start":"2024-10-01T20:45:45.692999Z","end":"2024-10-01T20:45:45.864507Z","steps":["trace[889256030] 'read index received'  (duration: 171.368499ms)","trace[889256030] 'applied index is now lower than readState.Index'  (duration: 138.545µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-01T20:45:45.864865Z","caller":"traceutil/trace.go:171","msg":"trace[1240445849] transaction","detail":"{read_only:false; response_revision:984; number_of_response:1; }","duration":"172.245671ms","start":"2024-10-01T20:45:45.692605Z","end":"2024-10-01T20:45:45.864850Z","steps":["trace[1240445849] 'process raft request'  (duration: 171.806281ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T20:45:45.864985Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.916672ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T20:45:45.865063Z","caller":"traceutil/trace.go:171","msg":"trace[728776308] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:984; }","duration":"158.075233ms","start":"2024-10-01T20:45:45.706977Z","end":"2024-10-01T20:45:45.865053Z","steps":["trace[728776308] 'agreement among raft nodes before linearized reading'  (duration: 157.85297ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T20:45:45.865210Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.203895ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1129"}
	{"level":"info","ts":"2024-10-01T20:45:45.865308Z","caller":"traceutil/trace.go:171","msg":"trace[429052145] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:984; }","duration":"172.303745ms","start":"2024-10-01T20:45:45.692994Z","end":"2024-10-01T20:45:45.865298Z","steps":["trace[429052145] 'agreement among raft nodes before linearized reading'  (duration: 172.159785ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T20:45:48.029704Z","caller":"traceutil/trace.go:171","msg":"trace[1776667962] transaction","detail":"{read_only:false; response_revision:986; number_of_response:1; }","duration":"114.9366ms","start":"2024-10-01T20:45:47.914728Z","end":"2024-10-01T20:45:48.029665Z","steps":["trace[1776667962] 'process raft request'  (duration: 114.430188ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:46:19 up 16 min,  0 users,  load average: 0.38, 0.25, 0.15
	Linux default-k8s-diff-port-878552 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [78323440e4e9503b9fb29943c7128695c7518927053b3ad9b42b1aec8791a06d] <==
	I1001 20:42:43.082744       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1001 20:42:43.082755       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1001 20:44:42.080978       1 handler_proxy.go:99] no RequestInfo found in the context
	E1001 20:44:42.081528       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1001 20:44:43.083985       1 handler_proxy.go:99] no RequestInfo found in the context
	E1001 20:44:43.084084       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1001 20:44:43.084155       1 handler_proxy.go:99] no RequestInfo found in the context
	E1001 20:44:43.084215       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1001 20:44:43.085285       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1001 20:44:43.085333       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1001 20:45:43.085823       1 handler_proxy.go:99] no RequestInfo found in the context
	E1001 20:45:43.085912       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1001 20:45:43.086283       1 handler_proxy.go:99] no RequestInfo found in the context
	E1001 20:45:43.086451       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1001 20:45:43.087112       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1001 20:45:43.088348       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
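Note: every error in this block concerns the v1beta1.metrics.k8s.io aggregated API: the apiserver keeps receiving 503s when it proxies to metrics-server, which is the classic signature of the metrics-server addon never becoming Available on this profile. A quick way to confirm the APIService itself is the problem (the context name is assumed to match the profile):

	kubectl --context default-k8s-diff-port-878552 get apiservice v1beta1.metrics.k8s.io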
	
	
	==> kube-apiserver [90ba7369fdd09ffc169c1a57256c1a30ba40cdfc2d480833758b899fda456d1f] <==
	W1001 20:34:32.962840       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.022743       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.069281       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.201295       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.220163       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.238871       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.282984       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.303725       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.333862       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.377032       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.382492       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.390141       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.441686       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.534763       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.594004       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.610520       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.682412       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.721800       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.779607       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.821015       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.828493       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.862357       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:33.964553       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:34.053428       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1001 20:34:34.099899       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [e9e865c2ca51f7ac9f6f501addebbe067f008a1aeafe5b80151686573c901539] <==
	E1001 20:41:18.979537       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:41:19.550767       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:41:48.986747       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:41:49.560134       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:42:18.994487       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:42:19.575782       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:42:49.001980       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:42:49.585396       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:43:19.010127       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:43:19.596227       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:43:49.017907       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:43:49.609228       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:44:19.025128       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:44:19.624686       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:44:49.034043       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:44:49.633780       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1001 20:45:06.970900       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-878552"
	E1001 20:45:19.042504       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:45:19.645454       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1001 20:45:49.051208       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:45:49.655807       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1001 20:45:57.923135       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="178.403µs"
	I1001 20:46:09.920852       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="81.653µs"
	E1001 20:46:19.058781       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1001 20:46:19.678623       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [5f3179c90451f3bf47ed5365f8acfe350f4c4869367228a274bc9aed4b567625] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 20:34:50.274174       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 20:34:50.301186       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.4"]
	E1001 20:34:50.301312       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 20:34:50.379295       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 20:34:50.379353       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 20:34:50.379381       1 server_linux.go:169] "Using iptables Proxier"
	I1001 20:34:50.389552       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 20:34:50.389890       1 server.go:483] "Version info" version="v1.31.1"
	I1001 20:34:50.389914       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 20:34:50.394729       1 config.go:199] "Starting service config controller"
	I1001 20:34:50.394786       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 20:34:50.394818       1 config.go:105] "Starting endpoint slice config controller"
	I1001 20:34:50.394822       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 20:34:50.398198       1 config.go:328] "Starting node config controller"
	I1001 20:34:50.399054       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 20:34:50.495188       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 20:34:50.495266       1 shared_informer.go:320] Caches are synced for service config
	I1001 20:34:50.499750       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d00b2f009a8ed9caf9c147fe463b4f73e62fcd28260bd2c467e4593a67500fe4] <==
	W1001 20:34:42.096178       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1001 20:34:42.097466       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:34:42.949684       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1001 20:34:42.950174       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:34:42.961060       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1001 20:34:42.961223       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:34:43.008286       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1001 20:34:43.008351       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 20:34:43.021863       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1001 20:34:43.023488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1001 20:34:43.109610       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1001 20:34:43.109688       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:34:43.115753       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1001 20:34:43.115848       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:34:43.118986       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1001 20:34:43.119065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:34:43.279611       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1001 20:34:43.279812       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1001 20:34:43.335071       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1001 20:34:43.335179       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 20:34:43.350960       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1001 20:34:43.351080       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 20:34:43.353463       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1001 20:34:43.353546       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1001 20:34:46.157590       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 01 20:45:17 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:45:17.902883    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-75m4s" podUID="c8eb4eb3-ea7f-4a88-8c1f-e3331f44ae53"
	Oct 01 20:45:25 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:45:25.078149    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815525077722526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:45:25 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:45:25.078215    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815525077722526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:45:32 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:45:32.902801    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-75m4s" podUID="c8eb4eb3-ea7f-4a88-8c1f-e3331f44ae53"
	Oct 01 20:45:35 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:45:35.080450    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815535080010479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:45:35 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:45:35.080507    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815535080010479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:45:43 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:45:43.920057    2891 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 01 20:45:43 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:45:43.920556    2891 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 01 20:45:43 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:45:43.920951    2891 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x9bw6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-75m4s_kube-system(c8eb4eb3-ea7f-4a88-8c1f-e3331f44ae53): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Oct 01 20:45:43 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:45:43.922482    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-75m4s" podUID="c8eb4eb3-ea7f-4a88-8c1f-e3331f44ae53"
	Oct 01 20:45:44 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:45:44.934713    2891 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 01 20:45:44 default-k8s-diff-port-878552 kubelet[2891]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 01 20:45:44 default-k8s-diff-port-878552 kubelet[2891]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 01 20:45:44 default-k8s-diff-port-878552 kubelet[2891]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 20:45:44 default-k8s-diff-port-878552 kubelet[2891]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 20:45:45 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:45:45.081851    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815545081599463,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:45:45 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:45:45.081876    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815545081599463,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:45:55 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:45:55.084148    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815555083662477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:45:55 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:45:55.084689    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815555083662477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:45:57 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:45:57.902681    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-75m4s" podUID="c8eb4eb3-ea7f-4a88-8c1f-e3331f44ae53"
	Oct 01 20:46:05 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:46:05.089386    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815565086545737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:46:05 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:46:05.089423    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815565086545737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:46:09 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:46:09.902852    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-75m4s" podUID="c8eb4eb3-ea7f-4a88-8c1f-e3331f44ae53"
	Oct 01 20:46:15 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:46:15.091864    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815575091321862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 20:46:15 default-k8s-diff-port-878552 kubelet[2891]: E1001 20:46:15.091926    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727815575091321862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [b53d014fc93fa0d3c13ceba3250b8c17ddc9ad02efc11dcbb47175016d6297ff] <==
	I1001 20:34:51.930775       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1001 20:34:51.947934       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1001 20:34:51.948094       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1001 20:34:51.956579       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1001 20:34:51.956769       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-878552_28b27df9-336d-4270-b7ee-fabafab5d940!
	I1001 20:34:51.957530       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f62159ef-15bf-4a2f-99b1-e8da4f3add22", APIVersion:"v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-878552_28b27df9-336d-4270-b7ee-fabafab5d940 became leader
	I1001 20:34:52.060007       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-878552_28b27df9-336d-4270-b7ee-fabafab5d940!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-878552 -n default-k8s-diff-port-878552
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-878552 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-75m4s
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-878552 describe pod metrics-server-6867b74b74-75m4s
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-878552 describe pod metrics-server-6867b74b74-75m4s: exit status 1 (64.147434ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-75m4s" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-878552 describe pod metrics-server-6867b74b74-75m4s: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (138.11s)
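Note on reproducing this post-mortem by hand: the check above boils down to the helper query `kubectl --context default-k8s-diff-port-878552 get po -A --field-selector=status.phase!=Running`, followed by describing whatever it returns. The Go sketch below approximates that same query with client-go. It is a minimal illustration only, not part of the test suite; it assumes a kubeconfig at its default location, and the context name is copied from this run's profile.

	// postmortem.go: a hypothetical sketch (not part of the minikube test harness)
	// that mirrors the post-mortem helper query: list every pod whose phase is not
	// Running in the profile's cluster.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig and pin the context to the profile under
		// investigation (context name taken from this run; substitute your own).
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		overrides := &clientcmd.ConfigOverrides{CurrentContext: "default-k8s-diff-port-878552"}
		cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Equivalent of `kubectl get po -A --field-selector=status.phase!=Running`.
		pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}

As in the log above, a pod that shows up here may already be gone by the time it is described (the ReplicaSet had just resynced metrics-server-6867b74b74), which is why the follow-up `kubectl describe pod` returned NotFound.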

                                                
                                    

Test pass (241/311)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 22.17
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 12.09
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.13
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 1.2
22 TestOffline 88.37
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 130.92
31 TestAddons/serial/GCPAuth/Namespaces 2.15
33 TestAddons/parallel/Registry 18.81
35 TestAddons/parallel/InspektorGadget 11.08
38 TestAddons/parallel/CSI 47.33
39 TestAddons/parallel/Headlamp 19.13
40 TestAddons/parallel/CloudSpanner 5.62
41 TestAddons/parallel/LocalPath 58.05
42 TestAddons/parallel/NvidiaDevicePlugin 6.54
43 TestAddons/parallel/Yakd 11.69
45 TestCertOptions 66.03
46 TestCertExpiration 272.34
48 TestForceSystemdFlag 76.28
49 TestForceSystemdEnv 58.27
51 TestKVMDriverInstallOrUpdate 5.17
55 TestErrorSpam/setup 40.55
56 TestErrorSpam/start 0.35
57 TestErrorSpam/status 0.76
58 TestErrorSpam/pause 1.55
59 TestErrorSpam/unpause 1.68
60 TestErrorSpam/stop 4.11
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 81.77
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 30.7
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.07
71 TestFunctional/serial/CacheCmd/cache/add_remote 4.12
72 TestFunctional/serial/CacheCmd/cache/add_local 2.33
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.83
77 TestFunctional/serial/CacheCmd/cache/delete 0.09
78 TestFunctional/serial/MinikubeKubectlCmd 0.1
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
80 TestFunctional/serial/ExtraConfig 35.48
81 TestFunctional/serial/ComponentHealth 0.07
82 TestFunctional/serial/LogsCmd 1.32
83 TestFunctional/serial/LogsFileCmd 1.32
84 TestFunctional/serial/InvalidService 4.17
86 TestFunctional/parallel/ConfigCmd 0.32
87 TestFunctional/parallel/DashboardCmd 13.32
88 TestFunctional/parallel/DryRun 0.26
89 TestFunctional/parallel/InternationalLanguage 0.14
90 TestFunctional/parallel/StatusCmd 0.71
94 TestFunctional/parallel/ServiceCmdConnect 8.54
95 TestFunctional/parallel/AddonsCmd 0.12
98 TestFunctional/parallel/SSHCmd 0.39
99 TestFunctional/parallel/CpCmd 1.3
100 TestFunctional/parallel/MySQL 22.85
101 TestFunctional/parallel/FileSync 0.2
102 TestFunctional/parallel/CertSync 1.36
106 TestFunctional/parallel/NodeLabels 0.06
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
110 TestFunctional/parallel/License 0.59
111 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
112 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
113 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
114 TestFunctional/parallel/Version/short 0.05
115 TestFunctional/parallel/Version/components 0.61
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.35
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
120 TestFunctional/parallel/ImageCommands/ImageBuild 4.21
121 TestFunctional/parallel/ImageCommands/Setup 1.97
122 TestFunctional/parallel/ServiceCmd/DeployApp 23.18
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.36
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.14
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.8
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 4.01
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.73
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.17
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.61
139 TestFunctional/parallel/ProfileCmd/profile_not_create 0.31
140 TestFunctional/parallel/ProfileCmd/profile_list 0.3
141 TestFunctional/parallel/ServiceCmd/List 0.43
142 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
143 TestFunctional/parallel/ServiceCmd/JSONOutput 0.44
144 TestFunctional/parallel/MountCmd/any-port 7.37
145 TestFunctional/parallel/ServiceCmd/HTTPS 0.28
146 TestFunctional/parallel/ServiceCmd/Format 0.3
147 TestFunctional/parallel/ServiceCmd/URL 0.37
148 TestFunctional/parallel/MountCmd/specific-port 2.18
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.71
150 TestFunctional/delete_echo-server_images 0.04
151 TestFunctional/delete_my-image_image 0.02
152 TestFunctional/delete_minikube_cached_images 0.02
156 TestMultiControlPlane/serial/StartCluster 198.34
157 TestMultiControlPlane/serial/DeployApp 6.81
158 TestMultiControlPlane/serial/PingHostFromPods 1.2
159 TestMultiControlPlane/serial/AddWorkerNode 59.29
160 TestMultiControlPlane/serial/NodeLabels 0.06
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.85
162 TestMultiControlPlane/serial/CopyFile 12.73
168 TestMultiControlPlane/serial/DeleteSecondaryNode 16.79
169 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.63
171 TestMultiControlPlane/serial/RestartCluster 256.42
172 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
173 TestMultiControlPlane/serial/AddSecondaryNode 79.92
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.85
178 TestJSONOutput/start/Command 52.52
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.66
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.6
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 6.67
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.2
206 TestMainNoArgs 0.04
207 TestMinikubeProfile 84.16
210 TestMountStart/serial/StartWithMountFirst 30.57
211 TestMountStart/serial/VerifyMountFirst 0.37
212 TestMountStart/serial/StartWithMountSecond 24.1
213 TestMountStart/serial/VerifyMountSecond 0.37
214 TestMountStart/serial/DeleteFirst 0.91
215 TestMountStart/serial/VerifyMountPostDelete 0.37
216 TestMountStart/serial/Stop 1.28
217 TestMountStart/serial/RestartStopped 22.5
218 TestMountStart/serial/VerifyMountPostStop 0.38
221 TestMultiNode/serial/FreshStart2Nodes 107.39
222 TestMultiNode/serial/DeployApp2Nodes 5.9
223 TestMultiNode/serial/PingHostFrom2Pods 0.79
224 TestMultiNode/serial/AddNode 52.56
225 TestMultiNode/serial/MultiNodeLabels 0.06
226 TestMultiNode/serial/ProfileList 0.6
227 TestMultiNode/serial/CopyFile 7.57
228 TestMultiNode/serial/StopNode 2.29
229 TestMultiNode/serial/StartAfterStop 40.03
231 TestMultiNode/serial/DeleteNode 2.1
233 TestMultiNode/serial/RestartMultiNode 178.99
234 TestMultiNode/serial/ValidateNameConflict 42.19
241 TestScheduledStopUnix 114.75
245 TestRunningBinaryUpgrade 207.72
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
259 TestPause/serial/Start 81.18
260 TestNoKubernetes/serial/StartWithK8s 96.01
262 TestNoKubernetes/serial/StartWithStopK8s 41.46
263 TestNoKubernetes/serial/Start 27.89
271 TestNetworkPlugins/group/false 5.73
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
276 TestNoKubernetes/serial/ProfileList 28.15
277 TestNoKubernetes/serial/Stop 1.35
278 TestNoKubernetes/serial/StartNoArgs 35.96
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
280 TestStoppedBinaryUpgrade/Setup 2.26
281 TestStoppedBinaryUpgrade/Upgrade 138.54
284 TestStoppedBinaryUpgrade/MinikubeLogs 0.85
286 TestStartStop/group/no-preload/serial/FirstStart 69.54
288 TestStartStop/group/embed-certs/serial/FirstStart 52.9
289 TestStartStop/group/no-preload/serial/DeployApp 11.3
290 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.11
292 TestStartStop/group/embed-certs/serial/DeployApp 10.26
293 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.93
298 TestStartStop/group/no-preload/serial/SecondStart 655.81
301 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 338.46
302 TestStartStop/group/embed-certs/serial/SecondStart 605.97
303 TestStartStop/group/old-k8s-version/serial/Stop 6.3
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
306 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.31
307 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.05
310 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 618.51
319 TestStartStop/group/newest-cni/serial/FirstStart 45.14
320 TestStartStop/group/newest-cni/serial/DeployApp 0
321 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.09
322 TestStartStop/group/newest-cni/serial/Stop 10.37
323 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
324 TestStartStop/group/newest-cni/serial/SecondStart 36.52
325 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
326 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
327 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
328 TestStartStop/group/newest-cni/serial/Pause 2.47
329 TestNetworkPlugins/group/auto/Start 56.84
330 TestNetworkPlugins/group/kindnet/Start 78.47
331 TestNetworkPlugins/group/calico/Start 77.23
332 TestNetworkPlugins/group/auto/KubeletFlags 0.23
333 TestNetworkPlugins/group/auto/NetCatPod 11.3
334 TestNetworkPlugins/group/auto/DNS 0.21
335 TestNetworkPlugins/group/auto/Localhost 0.16
336 TestNetworkPlugins/group/auto/HairPin 0.17
337 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
338 TestNetworkPlugins/group/custom-flannel/Start 75.88
339 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
340 TestNetworkPlugins/group/kindnet/NetCatPod 13.25
341 TestNetworkPlugins/group/kindnet/DNS 0.16
342 TestNetworkPlugins/group/kindnet/Localhost 0.12
343 TestNetworkPlugins/group/kindnet/HairPin 0.13
344 TestNetworkPlugins/group/enable-default-cni/Start 92.4
346 TestNetworkPlugins/group/calico/ControllerPod 6.01
347 TestNetworkPlugins/group/calico/KubeletFlags 0.23
348 TestNetworkPlugins/group/calico/NetCatPod 11.25
349 TestNetworkPlugins/group/calico/DNS 0.19
350 TestNetworkPlugins/group/calico/Localhost 0.14
351 TestNetworkPlugins/group/calico/HairPin 0.15
352 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
353 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.32
354 TestNetworkPlugins/group/flannel/Start 74.91
355 TestNetworkPlugins/group/custom-flannel/DNS 0.17
356 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
357 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
358 TestNetworkPlugins/group/bridge/Start 96.65
359 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
360 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.23
361 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
362 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
363 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
364 TestNetworkPlugins/group/flannel/ControllerPod 6.01
365 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
366 TestNetworkPlugins/group/flannel/NetCatPod 10.27
367 TestNetworkPlugins/group/flannel/DNS 0.17
368 TestNetworkPlugins/group/flannel/Localhost 0.13
369 TestNetworkPlugins/group/flannel/HairPin 0.13
370 TestNetworkPlugins/group/bridge/KubeletFlags 0.19
371 TestNetworkPlugins/group/bridge/NetCatPod 12.26
372 TestNetworkPlugins/group/bridge/DNS 0.16
373 TestNetworkPlugins/group/bridge/Localhost 0.12
374 TestNetworkPlugins/group/bridge/HairPin 0.15
x
+
TestDownloadOnly/v1.20.0/json-events (22.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-333407 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-333407 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (22.168208297s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (22.17s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1001 18:54:31.255888   18430 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1001 18:54:31.255980   18430 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-333407
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-333407: exit status 85 (53.176203ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-333407 | jenkins | v1.34.0 | 01 Oct 24 18:54 UTC |          |
	|         | -p download-only-333407        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 18:54:09
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 18:54:09.124891   18442 out.go:345] Setting OutFile to fd 1 ...
	I1001 18:54:09.125031   18442 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 18:54:09.125039   18442 out.go:358] Setting ErrFile to fd 2...
	I1001 18:54:09.125047   18442 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 18:54:09.125231   18442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	W1001 18:54:09.125365   18442 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19736-11198/.minikube/config/config.json: open /home/jenkins/minikube-integration/19736-11198/.minikube/config/config.json: no such file or directory
	I1001 18:54:09.125970   18442 out.go:352] Setting JSON to true
	I1001 18:54:09.126889   18442 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2191,"bootTime":1727806658,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 18:54:09.127000   18442 start.go:139] virtualization: kvm guest
	I1001 18:54:09.129376   18442 out.go:97] [download-only-333407] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1001 18:54:09.129478   18442 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball: no such file or directory
	I1001 18:54:09.129514   18442 notify.go:220] Checking for updates...
	I1001 18:54:09.130905   18442 out.go:169] MINIKUBE_LOCATION=19736
	I1001 18:54:09.132212   18442 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 18:54:09.133470   18442 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 18:54:09.134715   18442 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 18:54:09.135929   18442 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1001 18:54:09.138182   18442 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1001 18:54:09.138420   18442 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 18:54:09.238410   18442 out.go:97] Using the kvm2 driver based on user configuration
	I1001 18:54:09.238453   18442 start.go:297] selected driver: kvm2
	I1001 18:54:09.238465   18442 start.go:901] validating driver "kvm2" against <nil>
	I1001 18:54:09.238830   18442 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 18:54:09.238970   18442 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-11198/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 18:54:09.254052   18442 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 18:54:09.254111   18442 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 18:54:09.254647   18442 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1001 18:54:09.254813   18442 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 18:54:09.254847   18442 cni.go:84] Creating CNI manager for ""
	I1001 18:54:09.254900   18442 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 18:54:09.254911   18442 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 18:54:09.254979   18442 start.go:340] cluster config:
	{Name:download-only-333407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-333407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 18:54:09.255189   18442 iso.go:125] acquiring lock: {Name:mk06aa0d5182c9fbfa5e40313025cbec2b4400a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 18:54:09.256801   18442 out.go:97] Downloading VM boot image ...
	I1001 18:54:09.256844   18442 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19736-11198/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 18:54:18.520759   18442 out.go:97] Starting "download-only-333407" primary control-plane node in "download-only-333407" cluster
	I1001 18:54:18.520783   18442 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1001 18:54:18.619459   18442 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1001 18:54:18.619488   18442 cache.go:56] Caching tarball of preloaded images
	I1001 18:54:18.619643   18442 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1001 18:54:18.621223   18442 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1001 18:54:18.621246   18442 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1001 18:54:18.720733   18442 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-333407 host does not exist
	  To start a cluster, run: "minikube start -p download-only-333407"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-333407
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (12.09s)
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-195954 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-195954 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.088119005s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (12.09s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1001 18:54:43.651695   18430 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I1001 18:54:43.651742   18430 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)
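The download-only runs above only populate the local cache; no VM is created. A minimal reproduction of the v1.31.1 pass (a sketch, assuming the same out/minikube-linux-amd64 binary and MINIKUBE_HOME layout used by this job):

    # Pre-fetch the ISO, the preload tarball and the images for one Kubernetes version
    out/minikube-linux-amd64 start -o=json --download-only -p download-only-195954 \
      --force --alsologtostderr --kubernetes-version=v1.31.1 \
      --driver=kvm2 --container-runtime=crio
    # The preload tarball should then sit in the cache path logged above
    ls "$MINIKUBE_HOME/cache/preloaded-tarball/"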

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-195954
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-195954: exit status 85 (56.148925ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-333407 | jenkins | v1.34.0 | 01 Oct 24 18:54 UTC |                     |
	|         | -p download-only-333407        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 01 Oct 24 18:54 UTC | 01 Oct 24 18:54 UTC |
	| delete  | -p download-only-333407        | download-only-333407 | jenkins | v1.34.0 | 01 Oct 24 18:54 UTC | 01 Oct 24 18:54 UTC |
	| start   | -o=json --download-only        | download-only-195954 | jenkins | v1.34.0 | 01 Oct 24 18:54 UTC |                     |
	|         | -p download-only-195954        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 18:54:31
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 18:54:31.599766   18685 out.go:345] Setting OutFile to fd 1 ...
	I1001 18:54:31.600020   18685 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 18:54:31.600029   18685 out.go:358] Setting ErrFile to fd 2...
	I1001 18:54:31.600033   18685 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 18:54:31.600189   18685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 18:54:31.600765   18685 out.go:352] Setting JSON to true
	I1001 18:54:31.601545   18685 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2214,"bootTime":1727806658,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 18:54:31.601634   18685 start.go:139] virtualization: kvm guest
	I1001 18:54:31.603570   18685 out.go:97] [download-only-195954] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 18:54:31.603760   18685 notify.go:220] Checking for updates...
	I1001 18:54:31.604895   18685 out.go:169] MINIKUBE_LOCATION=19736
	I1001 18:54:31.606132   18685 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 18:54:31.607284   18685 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 18:54:31.608466   18685 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 18:54:31.609580   18685 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1001 18:54:31.611406   18685 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1001 18:54:31.611612   18685 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 18:54:31.642570   18685 out.go:97] Using the kvm2 driver based on user configuration
	I1001 18:54:31.642597   18685 start.go:297] selected driver: kvm2
	I1001 18:54:31.642604   18685 start.go:901] validating driver "kvm2" against <nil>
	I1001 18:54:31.643034   18685 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 18:54:31.643139   18685 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19736-11198/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 18:54:31.657902   18685 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 18:54:31.657955   18685 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 18:54:31.658521   18685 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1001 18:54:31.658661   18685 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 18:54:31.658689   18685 cni.go:84] Creating CNI manager for ""
	I1001 18:54:31.658738   18685 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 18:54:31.658761   18685 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 18:54:31.658817   18685 start.go:340] cluster config:
	{Name:download-only-195954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-195954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 18:54:31.658909   18685 iso.go:125] acquiring lock: {Name:mk06aa0d5182c9fbfa5e40313025cbec2b4400a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 18:54:31.660686   18685 out.go:97] Starting "download-only-195954" primary control-plane node in "download-only-195954" cluster
	I1001 18:54:31.660707   18685 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 18:54:32.172091   18685 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 18:54:32.172138   18685 cache.go:56] Caching tarball of preloaded images
	I1001 18:54:32.172297   18685 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 18:54:32.174141   18685 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1001 18:54:32.174166   18685 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I1001 18:54:32.272308   18685 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19736-11198/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-195954 host does not exist
	  To start a cluster, run: "minikube start -p download-only-195954"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.13s)
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-195954
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (1.2s)
=== RUN   TestBinaryMirror
I1001 18:54:44.215739   18430 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-213993 --alsologtostderr --binary-mirror http://127.0.0.1:46019 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-213993" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-213993
--- PASS: TestBinaryMirror (1.20s)
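TestBinaryMirror points the Kubernetes binary downloads at a local HTTP endpoint via --binary-mirror. A rough sketch of the same setup; the mirror directory and the use of python3 -m http.server are illustrative assumptions, only the minikube flags and port come from the run above:

    # Serve already-downloaded release binaries from a local mirror
    python3 -m http.server 46019 --directory /path/to/mirror &
    # Have minikube fetch kubectl and friends from that mirror instead of dl.k8s.io
    out/minikube-linux-amd64 start --download-only -p binary-mirror-213993 --alsologtostderr \
      --binary-mirror http://127.0.0.1:46019 --driver=kvm2 --container-runtime=crio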

                                                
                                    
TestOffline (88.37s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-770413 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-770413 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m27.36452309s)
helpers_test.go:175: Cleaning up "offline-crio-770413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-770413
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-770413: (1.008967923s)
--- PASS: TestOffline (88.37s)
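TestOffline exercises a crio start that should not depend on fresh downloads. Roughly, and assuming the ISO, preload and images are already cached from the earlier download-only tests:

    # Start must come up from the local cache; --wait=true blocks until components are healthy
    out/minikube-linux-amd64 start -p offline-crio-770413 --alsologtostderr -v=1 \
      --memory=2048 --wait=true --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 delete -p offline-crio-770413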

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:932: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-800266
addons_test.go:932: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-800266: exit status 85 (46.621452ms)

                                                
                                                
-- stdout --
	* Profile "addons-800266" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-800266"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:943: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-800266
addons_test.go:943: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-800266: exit status 85 (47.400514ms)

                                                
                                                
-- stdout --
	* Profile "addons-800266" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-800266"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (130.92s)
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-800266 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-800266 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m10.920476494s)
--- PASS: TestAddons/Setup (130.92s)
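The setup run enables every addon under test in a single start. For a manual reproduction a subset is usually enough; a sketch using addon names taken from the command above:

    out/minikube-linux-amd64 start -p addons-800266 --wait=true --memory=4000 \
      --driver=kvm2 --container-runtime=crio \
      --addons=ingress --addons=registry --addons=metrics-server --addons=csi-hostpath-driver
    # Individual addons can also be toggled after the cluster is up
    out/minikube-linux-amd64 -p addons-800266 addons enable yakd
    out/minikube-linux-amd64 -p addons-800266 addons disable yakd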

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (2.15s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-800266 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-800266 get secret gcp-auth -n new-namespace
addons_test.go:583: (dbg) Non-zero exit: kubectl --context addons-800266 get secret gcp-auth -n new-namespace: exit status 1 (98.293596ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:575: (dbg) Run:  kubectl --context addons-800266 logs -l app=gcp-auth -n gcp-auth
I1001 18:56:56.905544   18430 retry.go:31] will retry after 1.867002256s: %!w(<nil>): gcp-auth container logs: 
-- stdout --
	2024/10/01 18:56:55 GCP Auth Webhook started!
	2024/10/01 18:56:56 Ready to marshal response ...
	2024/10/01 18:56:56 Ready to write response ...

                                                
                                                
-- /stdout --
addons_test.go:583: (dbg) Run:  kubectl --context addons-800266 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (2.15s)
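The assertion above is that the gcp-auth webhook copies its secret into namespaces created after the addon is enabled, with one retry while the webhook finishes starting. The same check by hand, using only commands that appear in the log:

    kubectl --context addons-800266 create ns new-namespace
    # The secret can lag namespace creation by a second or two
    kubectl --context addons-800266 get secret gcp-auth -n new-namespace
    kubectl --context addons-800266 logs -l app=gcp-auth -n gcp-auth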

                                                
                                    
TestAddons/parallel/Registry (18.81s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.573614ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-s7g57" [973537c4-844f-4bcc-addb-882999c8dbbe] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002905066s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-tpcpz" [41439ce9-e054-4a4f-ab24-294daf5ce65a] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004535316s
addons_test.go:331: (dbg) Run:  kubectl --context addons-800266 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-800266 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-800266 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.904240879s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-800266 ip
2024/10/01 19:05:28 [DEBUG] GET http://192.168.39.56:5000
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-800266 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.81s)
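The registry addon is verified from two directions: in-cluster via service DNS, and from the host via the node IP on port 5000. Reproduced manually with the same probe image (the curl call is an added convenience for the host-side check, not part of the test):

    # In-cluster: resolve and probe the registry service
    kubectl --context addons-800266 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # From the host: hit the node IP that "minikube ip" prints, on the registry proxy port
    curl -sI "http://$(out/minikube-linux-amd64 -p addons-800266 ip):5000"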

                                                
                                    
TestAddons/parallel/InspektorGadget (11.08s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:756: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-9m8jv" [93d65b8c-1986-44e4-a794-83c17f9fd337] Running
addons_test.go:756: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004889166s
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-800266 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-amd64 -p addons-800266 addons disable inspektor-gadget --alsologtostderr -v=1: (6.076081809s)
--- PASS: TestAddons/parallel/InspektorGadget (11.08s)

                                                
                                    
TestAddons/parallel/CSI (47.33s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1001 19:05:29.120285   18430 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1001 19:05:29.126028   18430 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1001 19:05:29.126069   18430 kapi.go:107] duration metric: took 5.795588ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 5.807156ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-800266 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-800266 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-800266 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-800266 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-800266 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-800266 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-800266 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-800266 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-800266 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-800266 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-800266 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-800266 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-800266 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-800266 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [b7bda3ee-45d8-4d1d-b861-26b35d0b7d17] Pending
helpers_test.go:344: "task-pv-pod" [b7bda3ee-45d8-4d1d-b861-26b35d0b7d17] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [b7bda3ee-45d8-4d1d-b861-26b35d0b7d17] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004603038s
addons_test.go:511: (dbg) Run:  kubectl --context addons-800266 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-800266 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-800266 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-800266 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-800266 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-800266 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-800266 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-800266 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-800266 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-800266 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [575646ba-0714-43ff-84db-64681a170979] Pending
helpers_test.go:344: "task-pv-pod-restore" [575646ba-0714-43ff-84db-64681a170979] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [575646ba-0714-43ff-84db-64681a170979] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004063293s
addons_test.go:553: (dbg) Run:  kubectl --context addons-800266 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-800266 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-800266 delete volumesnapshot new-snapshot-demo
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-800266 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-800266 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-amd64 -p addons-800266 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.86939866s)
--- PASS: TestAddons/parallel/CSI (47.33s)
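The CSI sequence above is: claim a volume, write to it from a pod, snapshot it, then restore the snapshot into a new claim and mount that in a second pod. Condensed into the same kubectl calls (the manifests are the testdata files named in the log, shipped with the minikube repository):

    kubectl --context addons-800266 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-800266 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-800266 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-800266 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}
    # Restore the snapshot into a fresh claim and mount it in a second pod
    kubectl --context addons-800266 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-800266 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml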

                                                
                                    
TestAddons/parallel/Headlamp (19.13s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:741: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-800266 --alsologtostderr -v=1
addons_test.go:746: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-qbd8q" [250738e0-af26-4ac7-ae75-17ee476ec93f] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-qbd8q" [250738e0-af26-4ac7-ae75-17ee476ec93f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-qbd8q" [250738e0-af26-4ac7-ae75-17ee476ec93f] Running
addons_test.go:746: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.010647489s
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-800266 addons disable headlamp --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-amd64 -p addons-800266 addons disable headlamp --alsologtostderr -v=1: (6.246041562s)
--- PASS: TestAddons/parallel/Headlamp (19.13s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.62s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:773: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-v8k84" [a6932552-841d-4d06-84af-d278497cf71f] Running
addons_test.go:773: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003634371s
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-800266 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.62s)

                                                
                                    
TestAddons/parallel/LocalPath (58.05s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:881: (dbg) Run:  kubectl --context addons-800266 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:887: (dbg) Run:  kubectl --context addons-800266 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:891: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-800266 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-800266 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-800266 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-800266 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-800266 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-800266 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-800266 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-800266 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [9f699b1e-8388-4cf9-bcaa-a2d5526c6d87] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [9f699b1e-8388-4cf9-bcaa-a2d5526c6d87] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [9f699b1e-8388-4cf9-bcaa-a2d5526c6d87] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.010805208s
addons_test.go:899: (dbg) Run:  kubectl --context addons-800266 get pvc test-pvc -o=json
addons_test.go:908: (dbg) Run:  out/minikube-linux-amd64 -p addons-800266 ssh "cat /opt/local-path-provisioner/pvc-8cdb206c-3008-4806-8f7b-043e61fbf684_default_test-pvc/file1"
addons_test.go:920: (dbg) Run:  kubectl --context addons-800266 delete pod test-local-path
addons_test.go:924: (dbg) Run:  kubectl --context addons-800266 delete pvc test-pvc
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-800266 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-amd64 -p addons-800266 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.19940834s)
--- PASS: TestAddons/parallel/LocalPath (58.05s)
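local-path provisions the claim as a plain directory on the node, which is why the test can read the written file back over minikube ssh. A sketch of the same round trip; the pvc-*_default_test-pvc glob stands in for the generated directory name seen above:

    kubectl --context addons-800266 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-800266 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # Read the file straight off the node's local-path directory
    out/minikube-linux-amd64 -p addons-800266 ssh \
      "cat /opt/local-path-provisioner/pvc-*_default_test-pvc/file1"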

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.54s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:956: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-brmgb" [8958de05-2c3e-499b-9290-48c68cef124f] Running
addons_test.go:956: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004879069s
addons_test.go:959: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-800266
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.54s)

                                                
                                    
TestAddons/parallel/Yakd (11.69s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:967: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-hkw4p" [3d91f6f6-2903-4708-9bc4-03e03fffa147] Running
addons_test.go:967: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003322938s
addons_test.go:971: (dbg) Run:  out/minikube-linux-amd64 -p addons-800266 addons disable yakd --alsologtostderr -v=1
addons_test.go:971: (dbg) Done: out/minikube-linux-amd64 -p addons-800266 addons disable yakd --alsologtostderr -v=1: (5.688770862s)
--- PASS: TestAddons/parallel/Yakd (11.69s)

                                                
                                    
TestCertOptions (66.03s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-432128 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-432128 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m4.587420949s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-432128 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-432128 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-432128 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-432128" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-432128
--- PASS: TestCertOptions (66.03s)
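TestCertOptions asserts that the extra --apiserver-ips and --apiserver-names end up as SANs in the API server's serving certificate and that the custom port 8555 is wired into the kubeconfig. To inspect that by hand with the commands from the run (the grep filters are added here for readability):

    out/minikube-linux-amd64 -p cert-options-432128 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"
    # The server URL in the kubeconfig should use the non-default API server port
    kubectl --context cert-options-432128 config view | grep 8555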

                                                
                                    
TestCertExpiration (272.34s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-402897 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-402897 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (52.0188114s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-402897 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-402897 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (39.329549916s)
helpers_test.go:175: Cleaning up "cert-expiration-402897" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-402897
--- PASS: TestCertExpiration (272.34s)

                                                
                                    
TestForceSystemdFlag (76.28s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-265488 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-265488 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m14.31867064s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-265488 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-265488" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-265488
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-265488: (1.740933421s)
--- PASS: TestForceSystemdFlag (76.28s)
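--force-systemd switches the container runtime to the systemd cgroup manager, and the test confirms this by reading CRI-O's drop-in config. A manual spot check; grepping for cgroup_manager is an assumption about the key name in that file, not something the log shows:

    out/minikube-linux-amd64 -p force-systemd-flag-265488 ssh \
      "cat /etc/crio/crio.conf.d/02-crio.conf" | grep -i cgroup_manager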

                                                
                                    
TestForceSystemdEnv (58.27s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-528861 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-528861 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (56.956094866s)
helpers_test.go:175: Cleaning up "force-systemd-env-528861" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-528861
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-528861: (1.31327605s)
--- PASS: TestForceSystemdEnv (58.27s)

                                                
                                    
TestKVMDriverInstallOrUpdate (5.17s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I1001 20:08:09.845340   18430 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1001 20:08:09.845464   18430 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1001 20:08:09.879846   18430 install.go:62] docker-machine-driver-kvm2: exit status 1
W1001 20:08:09.880147   18430 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1001 20:08:09.880196   18430 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2600807137/001/docker-machine-driver-kvm2
I1001 20:08:10.085094   18430 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2600807137/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x466f640 0x466f640 0x466f640 0x466f640 0x466f640 0x466f640 0x466f640] Decompressors:map[bz2:0xc000808f30 gz:0xc000808f38 tar:0xc000808ee0 tar.bz2:0xc000808ef0 tar.gz:0xc000808f00 tar.xz:0xc000808f10 tar.zst:0xc000808f20 tbz2:0xc000808ef0 tgz:0xc000808f00 txz:0xc000808f10 tzst:0xc000808f20 xz:0xc000808f40 zip:0xc000808f50 zst:0xc000808f48] Getters:map[file:0xc001749480 http:0xc000ba2a50 https:0xc000ba2aa0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1001 20:08:10.085137   18430 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2600807137/001/docker-machine-driver-kvm2
I1001 20:08:12.870908   18430 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1001 20:08:12.871028   18430 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1001 20:08:12.900816   18430 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1001 20:08:12.900867   18430 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1001 20:08:12.900954   18430 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1001 20:08:12.900993   18430 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2600807137/002/docker-machine-driver-kvm2
I1001 20:08:12.936613   18430 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2600807137/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x466f640 0x466f640 0x466f640 0x466f640 0x466f640 0x466f640 0x466f640] Decompressors:map[bz2:0xc000808f30 gz:0xc000808f38 tar:0xc000808ee0 tar.bz2:0xc000808ef0 tar.gz:0xc000808f00 tar.xz:0xc000808f10 tar.zst:0xc000808f20 tbz2:0xc000808ef0 tgz:0xc000808f00 txz:0xc000808f10 tzst:0xc000808f20 xz:0xc000808f40 zip:0xc000808f50 zst:0xc000808f48] Getters:map[file:0xc001600220 http:0xc000b26410 https:0xc000b26460] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1001 20:08:12.936663   18430 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2600807137/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (5.17s)
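The updater first tries the arch-suffixed release asset and, when its checksum file 404s, falls back to the unsuffixed name, which is exactly the sequence logged above. A bash sketch of that fallback against the same release URLs:

    ver=v1.3.0
    base=https://github.com/kubernetes/minikube/releases/download/$ver
    # Prefer the arch-specific binary; fall back to the common name if it is unavailable
    curl -fLo docker-machine-driver-kvm2 "$base/docker-machine-driver-kvm2-amd64" \
      || curl -fLo docker-machine-driver-kvm2 "$base/docker-machine-driver-kvm2"
    chmod +x docker-machine-driver-kvm2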

                                                
                                    
TestErrorSpam/setup (40.55s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-130269 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-130269 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-130269 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-130269 --driver=kvm2  --container-runtime=crio: (40.5539983s)
--- PASS: TestErrorSpam/setup (40.55s)

                                                
                                    
TestErrorSpam/start (0.35s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130269 --log_dir /tmp/nospam-130269 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130269 --log_dir /tmp/nospam-130269 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130269 --log_dir /tmp/nospam-130269 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
TestErrorSpam/status (0.76s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130269 --log_dir /tmp/nospam-130269 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130269 --log_dir /tmp/nospam-130269 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130269 --log_dir /tmp/nospam-130269 status
--- PASS: TestErrorSpam/status (0.76s)

                                                
                                    
TestErrorSpam/pause (1.55s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130269 --log_dir /tmp/nospam-130269 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130269 --log_dir /tmp/nospam-130269 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130269 --log_dir /tmp/nospam-130269 pause
--- PASS: TestErrorSpam/pause (1.55s)

                                                
                                    
TestErrorSpam/unpause (1.68s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130269 --log_dir /tmp/nospam-130269 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130269 --log_dir /tmp/nospam-130269 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130269 --log_dir /tmp/nospam-130269 unpause
--- PASS: TestErrorSpam/unpause (1.68s)

                                                
                                    
TestErrorSpam/stop (4.11s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130269 --log_dir /tmp/nospam-130269 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-130269 --log_dir /tmp/nospam-130269 stop: (1.582478732s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130269 --log_dir /tmp/nospam-130269 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-130269 --log_dir /tmp/nospam-130269 stop: (1.285727038s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-130269 --log_dir /tmp/nospam-130269 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-130269 --log_dir /tmp/nospam-130269 stop: (1.244583907s)
--- PASS: TestErrorSpam/stop (4.11s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19736-11198/.minikube/files/etc/test/nested/copy/18430/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (81.77s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-338309 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-338309 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m21.769395836s)
--- PASS: TestFunctional/serial/StartWithProxy (81.77s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (30.7s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1001 19:15:12.059878   18430 config.go:182] Loaded profile config "functional-338309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-338309 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-338309 --alsologtostderr -v=8: (30.696305556s)
functional_test.go:663: soft start took 30.696980814s for "functional-338309" cluster.
I1001 19:15:42.756541   18430 config.go:182] Loaded profile config "functional-338309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (30.70s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-338309 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-338309 cache add registry.k8s.io/pause:3.1: (1.373813379s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-338309 cache add registry.k8s.io/pause:3.3: (1.328036982s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-338309 cache add registry.k8s.io/pause:latest: (1.418568789s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.12s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.33s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-338309 /tmp/TestFunctionalserialCacheCmdcacheadd_local1527052080/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 cache add minikube-local-cache-test:functional-338309
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-338309 cache add minikube-local-cache-test:functional-338309: (1.99411288s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 cache delete minikube-local-cache-test:functional-338309
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-338309
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.33s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.83s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-338309 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (207.788352ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-338309 cache reload: (1.162126426s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.83s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 kubectl -- --context functional-338309 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-338309 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (35.48s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-338309 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-338309 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.475571518s)
functional_test.go:761: restart took 35.475716614s for "functional-338309" cluster.
I1001 19:16:27.237763   18430 config.go:182] Loaded profile config "functional-338309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (35.48s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-338309 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-338309 logs: (1.322562688s)
--- PASS: TestFunctional/serial/LogsCmd (1.32s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 logs --file /tmp/TestFunctionalserialLogsFileCmd2294065517/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-338309 logs --file /tmp/TestFunctionalserialLogsFileCmd2294065517/001/logs.txt: (1.315304523s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.32s)

                                                
                                    
TestFunctional/serial/InvalidService (4.17s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-338309 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-338309
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-338309: exit status 115 (267.710596ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.74:30594 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-338309 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.17s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-338309 config get cpus: exit status 14 (53.023898ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-338309 config get cpus: exit status 14 (46.209407ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.32s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-338309 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-338309 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 29291: os: process already finished
E1001 19:17:19.521068   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
E1001 19:17:40.002645   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
E1001 19:18:20.964549   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
E1001 19:19:42.886263   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/DashboardCmd (13.32s)

                                                
                                    
TestFunctional/parallel/DryRun (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-338309 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-338309 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (134.850992ms)

                                                
                                                
-- stdout --
	* [functional-338309] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19736
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 19:17:00.517249   29020 out.go:345] Setting OutFile to fd 1 ...
	I1001 19:17:00.517463   29020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:17:00.517476   29020 out.go:358] Setting ErrFile to fd 2...
	I1001 19:17:00.517480   29020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:17:00.517649   29020 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 19:17:00.518350   29020 out.go:352] Setting JSON to false
	I1001 19:17:00.519354   29020 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3562,"bootTime":1727806658,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 19:17:00.519447   29020 start.go:139] virtualization: kvm guest
	I1001 19:17:00.521255   29020 out.go:177] * [functional-338309] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 19:17:00.522608   29020 notify.go:220] Checking for updates...
	I1001 19:17:00.522624   29020 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 19:17:00.524175   29020 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 19:17:00.525561   29020 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 19:17:00.526704   29020 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:17:00.528019   29020 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 19:17:00.529349   29020 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 19:17:00.530947   29020 config.go:182] Loaded profile config "functional-338309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:17:00.531328   29020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:17:00.531368   29020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:17:00.547656   29020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46717
	I1001 19:17:00.548083   29020 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:17:00.548652   29020 main.go:141] libmachine: Using API Version  1
	I1001 19:17:00.548687   29020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:17:00.549002   29020 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:17:00.549168   29020 main.go:141] libmachine: (functional-338309) Calling .DriverName
	I1001 19:17:00.549375   29020 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 19:17:00.549663   29020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:17:00.549695   29020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:17:00.566627   29020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46601
	I1001 19:17:00.567142   29020 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:17:00.567667   29020 main.go:141] libmachine: Using API Version  1
	I1001 19:17:00.567688   29020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:17:00.568024   29020 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:17:00.568273   29020 main.go:141] libmachine: (functional-338309) Calling .DriverName
	I1001 19:17:00.602179   29020 out.go:177] * Using the kvm2 driver based on existing profile
	I1001 19:17:00.603334   29020 start.go:297] selected driver: kvm2
	I1001 19:17:00.603350   29020 start.go:901] validating driver "kvm2" against &{Name:functional-338309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-338309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.74 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:17:00.603481   29020 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 19:17:00.605656   29020 out.go:201] 
	W1001 19:17:00.607068   29020 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1001 19:17:00.608354   29020 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-338309 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.26s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-338309 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-338309 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (139.658712ms)

                                                
                                                
-- stdout --
	* [functional-338309] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19736
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 19:17:00.781991   29076 out.go:345] Setting OutFile to fd 1 ...
	I1001 19:17:00.782091   29076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:17:00.782096   29076 out.go:358] Setting ErrFile to fd 2...
	I1001 19:17:00.782099   29076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:17:00.782378   29076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 19:17:00.782911   29076 out.go:352] Setting JSON to false
	I1001 19:17:00.783819   29076 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3563,"bootTime":1727806658,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 19:17:00.783920   29076 start.go:139] virtualization: kvm guest
	I1001 19:17:00.786117   29076 out.go:177] * [functional-338309] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1001 19:17:00.787450   29076 notify.go:220] Checking for updates...
	I1001 19:17:00.787466   29076 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 19:17:00.788781   29076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 19:17:00.790031   29076 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 19:17:00.791344   29076 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 19:17:00.792443   29076 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 19:17:00.793675   29076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 19:17:00.795180   29076 config.go:182] Loaded profile config "functional-338309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:17:00.795565   29076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:17:00.795609   29076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:17:00.811248   29076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38525
	I1001 19:17:00.811775   29076 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:17:00.812484   29076 main.go:141] libmachine: Using API Version  1
	I1001 19:17:00.812506   29076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:17:00.812862   29076 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:17:00.813033   29076 main.go:141] libmachine: (functional-338309) Calling .DriverName
	I1001 19:17:00.813277   29076 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 19:17:00.813656   29076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:17:00.813693   29076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:17:00.829159   29076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41081
	I1001 19:17:00.829782   29076 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:17:00.830357   29076 main.go:141] libmachine: Using API Version  1
	I1001 19:17:00.830378   29076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:17:00.830801   29076 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:17:00.831018   29076 main.go:141] libmachine: (functional-338309) Calling .DriverName
	I1001 19:17:00.870264   29076 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1001 19:17:00.871461   29076 start.go:297] selected driver: kvm2
	I1001 19:17:00.871481   29076 start.go:901] validating driver "kvm2" against &{Name:functional-338309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-338309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.74 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:17:00.871628   29076 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 19:17:00.873981   29076 out.go:201] 
	W1001 19:17:00.875279   29076 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1001 19:17:00.876374   29076 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.71s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-338309 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-338309 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-9mtcl" [abc7b6fd-f718-461c-bb6a-f17f81d11687] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-9mtcl" [abc7b6fd-f718-461c-bb6a-f17f81d11687] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004132413s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.50.74:31061
functional_test.go:1675: http://192.168.50.74:31061: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-9mtcl

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.74:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.50.74:31061
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.54s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh -n functional-338309 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 cp functional-338309:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3099424314/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh -n functional-338309 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh -n functional-338309 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.30s)

                                                
                                    
TestFunctional/parallel/MySQL (22.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-338309 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-rkbcv" [16385339-46f2-4a5e-ac4b-5c53d81e7422] Pending
helpers_test.go:344: "mysql-6cdb49bbb-rkbcv" [16385339-46f2-4a5e-ac4b-5c53d81e7422] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-rkbcv" [16385339-46f2-4a5e-ac4b-5c53d81e7422] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.00383998s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-338309 exec mysql-6cdb49bbb-rkbcv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-338309 exec mysql-6cdb49bbb-rkbcv -- mysql -ppassword -e "show databases;": exit status 1 (138.355351ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1001 19:16:55.982148   18430 retry.go:31] will retry after 1.384285494s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-338309 exec mysql-6cdb49bbb-rkbcv -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.85s)

                                                
                                    
TestFunctional/parallel/FileSync (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/18430/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh "sudo cat /etc/test/nested/copy/18430/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

                                                
                                    
TestFunctional/parallel/CertSync (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/18430.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh "sudo cat /etc/ssl/certs/18430.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/18430.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh "sudo cat /usr/share/ca-certificates/18430.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/184302.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh "sudo cat /etc/ssl/certs/184302.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/184302.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh "sudo cat /usr/share/ca-certificates/184302.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.36s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-338309 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-338309 ssh "sudo systemctl is-active docker": exit status 1 (232.379773ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-338309 ssh "sudo systemctl is-active containerd": exit status 1 (213.235399ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

                                                
                                    
TestFunctional/parallel/License (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.59s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/Version/short (0.05s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.61s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.61s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-338309 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-338309
localhost/kicbase/echo-server:functional-338309
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-338309 image ls --format short --alsologtostderr:
I1001 19:17:02.587521   29300 out.go:345] Setting OutFile to fd 1 ...
I1001 19:17:02.587656   29300 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 19:17:02.587670   29300 out.go:358] Setting ErrFile to fd 2...
I1001 19:17:02.587676   29300 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 19:17:02.587936   29300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
I1001 19:17:02.588855   29300 config.go:182] Loaded profile config "functional-338309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 19:17:02.589017   29300 config.go:182] Loaded profile config "functional-338309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 19:17:02.589560   29300 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 19:17:02.589615   29300 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 19:17:02.605166   29300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37329
I1001 19:17:02.605704   29300 main.go:141] libmachine: () Calling .GetVersion
I1001 19:17:02.606418   29300 main.go:141] libmachine: Using API Version  1
I1001 19:17:02.606455   29300 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 19:17:02.606890   29300 main.go:141] libmachine: () Calling .GetMachineName
I1001 19:17:02.607167   29300 main.go:141] libmachine: (functional-338309) Calling .GetState
I1001 19:17:02.609548   29300 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 19:17:02.609602   29300 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 19:17:02.625507   29300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41343
I1001 19:17:02.626019   29300 main.go:141] libmachine: () Calling .GetVersion
I1001 19:17:02.626572   29300 main.go:141] libmachine: Using API Version  1
I1001 19:17:02.626588   29300 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 19:17:02.626953   29300 main.go:141] libmachine: () Calling .GetMachineName
I1001 19:17:02.627177   29300 main.go:141] libmachine: (functional-338309) Calling .DriverName
I1001 19:17:02.627453   29300 ssh_runner.go:195] Run: systemctl --version
I1001 19:17:02.627493   29300 main.go:141] libmachine: (functional-338309) Calling .GetSSHHostname
I1001 19:17:02.630962   29300 main.go:141] libmachine: (functional-338309) DBG | domain functional-338309 has defined MAC address 52:54:00:d1:ac:15 in network mk-functional-338309
I1001 19:17:02.631438   29300 main.go:141] libmachine: (functional-338309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ac:15", ip: ""} in network mk-functional-338309: {Iface:virbr1 ExpiryTime:2024-10-01 20:14:04 +0000 UTC Type:0 Mac:52:54:00:d1:ac:15 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:functional-338309 Clientid:01:52:54:00:d1:ac:15}
I1001 19:17:02.631479   29300 main.go:141] libmachine: (functional-338309) DBG | domain functional-338309 has defined IP address 192.168.50.74 and MAC address 52:54:00:d1:ac:15 in network mk-functional-338309
I1001 19:17:02.631625   29300 main.go:141] libmachine: (functional-338309) Calling .GetSSHPort
I1001 19:17:02.631823   29300 main.go:141] libmachine: (functional-338309) Calling .GetSSHKeyPath
I1001 19:17:02.631988   29300 main.go:141] libmachine: (functional-338309) Calling .GetSSHUsername
I1001 19:17:02.632138   29300 sshutil.go:53] new ssh client: &{IP:192.168.50.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/functional-338309/id_rsa Username:docker}
I1001 19:17:02.760711   29300 ssh_runner.go:195] Run: sudo crictl images --output json
I1001 19:17:02.881748   29300 main.go:141] libmachine: Making call to close driver server
I1001 19:17:02.881764   29300 main.go:141] libmachine: (functional-338309) Calling .Close
I1001 19:17:02.882049   29300 main.go:141] libmachine: Successfully made call to close driver server
I1001 19:17:02.882069   29300 main.go:141] libmachine: Making call to close connection to plugin binary
I1001 19:17:02.882096   29300 main.go:141] libmachine: (functional-338309) DBG | Closing plugin on server side
I1001 19:17:02.882160   29300 main.go:141] libmachine: Making call to close driver server
I1001 19:17:02.882188   29300 main.go:141] libmachine: (functional-338309) Calling .Close
I1001 19:17:02.882418   29300 main.go:141] libmachine: Successfully made call to close driver server
I1001 19:17:02.882446   29300 main.go:141] libmachine: (functional-338309) DBG | Closing plugin on server side
I1001 19:17:02.882452   29300 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-338309 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/kicbase/echo-server           | functional-338309  | 9056ab77afb8e | 4.94MB |
| localhost/my-image                      | functional-338309  | 302df390070b5 | 1.47MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-338309  | fe8054c597be5 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-338309 image ls --format table --alsologtostderr:
I1001 19:17:07.630379   29636 out.go:345] Setting OutFile to fd 1 ...
I1001 19:17:07.630630   29636 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 19:17:07.630638   29636 out.go:358] Setting ErrFile to fd 2...
I1001 19:17:07.630642   29636 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 19:17:07.630837   29636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
I1001 19:17:07.631519   29636 config.go:182] Loaded profile config "functional-338309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 19:17:07.631616   29636 config.go:182] Loaded profile config "functional-338309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 19:17:07.631996   29636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 19:17:07.632035   29636 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 19:17:07.647056   29636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40851
I1001 19:17:07.647610   29636 main.go:141] libmachine: () Calling .GetVersion
I1001 19:17:07.648301   29636 main.go:141] libmachine: Using API Version  1
I1001 19:17:07.648327   29636 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 19:17:07.648658   29636 main.go:141] libmachine: () Calling .GetMachineName
I1001 19:17:07.648820   29636 main.go:141] libmachine: (functional-338309) Calling .GetState
I1001 19:17:07.650832   29636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 19:17:07.650894   29636 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 19:17:07.667304   29636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33311
I1001 19:17:07.667812   29636 main.go:141] libmachine: () Calling .GetVersion
I1001 19:17:07.668318   29636 main.go:141] libmachine: Using API Version  1
I1001 19:17:07.668334   29636 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 19:17:07.668721   29636 main.go:141] libmachine: () Calling .GetMachineName
I1001 19:17:07.668897   29636 main.go:141] libmachine: (functional-338309) Calling .DriverName
I1001 19:17:07.669114   29636 ssh_runner.go:195] Run: systemctl --version
I1001 19:17:07.669138   29636 main.go:141] libmachine: (functional-338309) Calling .GetSSHHostname
I1001 19:17:07.672387   29636 main.go:141] libmachine: (functional-338309) DBG | domain functional-338309 has defined MAC address 52:54:00:d1:ac:15 in network mk-functional-338309
I1001 19:17:07.672780   29636 main.go:141] libmachine: (functional-338309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ac:15", ip: ""} in network mk-functional-338309: {Iface:virbr1 ExpiryTime:2024-10-01 20:14:04 +0000 UTC Type:0 Mac:52:54:00:d1:ac:15 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:functional-338309 Clientid:01:52:54:00:d1:ac:15}
I1001 19:17:07.672812   29636 main.go:141] libmachine: (functional-338309) DBG | domain functional-338309 has defined IP address 192.168.50.74 and MAC address 52:54:00:d1:ac:15 in network mk-functional-338309
I1001 19:17:07.672931   29636 main.go:141] libmachine: (functional-338309) Calling .GetSSHPort
I1001 19:17:07.673105   29636 main.go:141] libmachine: (functional-338309) Calling .GetSSHKeyPath
I1001 19:17:07.673271   29636 main.go:141] libmachine: (functional-338309) Calling .GetSSHUsername
I1001 19:17:07.673539   29636 sshutil.go:53] new ssh client: &{IP:192.168.50.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/functional-338309/id_rsa Username:docker}
I1001 19:17:07.768889   29636 ssh_runner.go:195] Run: sudo crictl images --output json
I1001 19:17:07.887206   29636 main.go:141] libmachine: Making call to close driver server
I1001 19:17:07.887230   29636 main.go:141] libmachine: (functional-338309) Calling .Close
I1001 19:17:07.887534   29636 main.go:141] libmachine: (functional-338309) DBG | Closing plugin on server side
I1001 19:17:07.887554   29636 main.go:141] libmachine: Successfully made call to close driver server
I1001 19:17:07.887565   29636 main.go:141] libmachine: Making call to close connection to plugin binary
I1001 19:17:07.887578   29636 main.go:141] libmachine: Making call to close driver server
I1001 19:17:07.887588   29636 main.go:141] libmachine: (functional-338309) Calling .Close
I1001 19:17:07.887835   29636 main.go:141] libmachine: Successfully made call to close driver server
I1001 19:17:07.887853   29636 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-338309 image ls --format json --alsologtostderr:
[{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"247dcced9c6a776f0526ee96b1dceec69bc46e31f4ca71f77c4efd55506b0f4b","repoDigests":["docker.io/library/9aaab297928cabfd3843d3a2c40bb78b5282ae104f9d044fe0a9e6c51b90e12c-tmp@sha256:925edabb545972ad91180766405a319bb2cff0c83261fae8a3bbd0b7badd76c4"],"repoTags":[],"size":"1466018"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d1
66501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5
f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d
0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-338309"],"size":"4943877"},{"id":"fe8054c597be5a2b2dc95d70c5f4289a861ae5db71222b01be0c8e997cf1ef79","repoDigests":["localhost/minikube-local-cache-test@sha256:f2e86b37bd7e71c570bef2461b62f6b0cc24bf132e
3a83a320d65b9ed978940a"],"repoTags":["localhost/minikube-local-cache-test:functional-338309"],"size":"3330"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/
kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"302df390070b5614836d6523d262d067521fda81fc3c9f970a22bd7a7e91713b","repoDigests":["localhost/my-image@sha256:5a975507ed17ce58918c0e6e004752bcfdcfd18f24a3dafd39656e1aeaf9ce40"],"repoTags":["localhost/my-image:functional-338309"],"size":"1468600"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8d
d876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-338309 image ls --format json --alsologtostderr:
I1001 19:17:07.359194   29594 out.go:345] Setting OutFile to fd 1 ...
I1001 19:17:07.359469   29594 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 19:17:07.359479   29594 out.go:358] Setting ErrFile to fd 2...
I1001 19:17:07.359485   29594 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 19:17:07.359761   29594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
I1001 19:17:07.360617   29594 config.go:182] Loaded profile config "functional-338309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 19:17:07.360736   29594 config.go:182] Loaded profile config "functional-338309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 19:17:07.361231   29594 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 19:17:07.361280   29594 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 19:17:07.376456   29594 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34865
I1001 19:17:07.376953   29594 main.go:141] libmachine: () Calling .GetVersion
I1001 19:17:07.377530   29594 main.go:141] libmachine: Using API Version  1
I1001 19:17:07.377557   29594 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 19:17:07.377917   29594 main.go:141] libmachine: () Calling .GetMachineName
I1001 19:17:07.378128   29594 main.go:141] libmachine: (functional-338309) Calling .GetState
I1001 19:17:07.380260   29594 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 19:17:07.380324   29594 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 19:17:07.395769   29594 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
I1001 19:17:07.396346   29594 main.go:141] libmachine: () Calling .GetVersion
I1001 19:17:07.396919   29594 main.go:141] libmachine: Using API Version  1
I1001 19:17:07.396951   29594 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 19:17:07.397293   29594 main.go:141] libmachine: () Calling .GetMachineName
I1001 19:17:07.397483   29594 main.go:141] libmachine: (functional-338309) Calling .DriverName
I1001 19:17:07.397686   29594 ssh_runner.go:195] Run: systemctl --version
I1001 19:17:07.397743   29594 main.go:141] libmachine: (functional-338309) Calling .GetSSHHostname
I1001 19:17:07.400902   29594 main.go:141] libmachine: (functional-338309) DBG | domain functional-338309 has defined MAC address 52:54:00:d1:ac:15 in network mk-functional-338309
I1001 19:17:07.401399   29594 main.go:141] libmachine: (functional-338309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ac:15", ip: ""} in network mk-functional-338309: {Iface:virbr1 ExpiryTime:2024-10-01 20:14:04 +0000 UTC Type:0 Mac:52:54:00:d1:ac:15 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:functional-338309 Clientid:01:52:54:00:d1:ac:15}
I1001 19:17:07.401434   29594 main.go:141] libmachine: (functional-338309) DBG | domain functional-338309 has defined IP address 192.168.50.74 and MAC address 52:54:00:d1:ac:15 in network mk-functional-338309
I1001 19:17:07.401563   29594 main.go:141] libmachine: (functional-338309) Calling .GetSSHPort
I1001 19:17:07.401736   29594 main.go:141] libmachine: (functional-338309) Calling .GetSSHKeyPath
I1001 19:17:07.401890   29594 main.go:141] libmachine: (functional-338309) Calling .GetSSHUsername
I1001 19:17:07.402043   29594 sshutil.go:53] new ssh client: &{IP:192.168.50.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/functional-338309/id_rsa Username:docker}
I1001 19:17:07.499857   29594 ssh_runner.go:195] Run: sudo crictl images --output json
I1001 19:17:07.567580   29594 main.go:141] libmachine: Making call to close driver server
I1001 19:17:07.567595   29594 main.go:141] libmachine: (functional-338309) Calling .Close
I1001 19:17:07.567815   29594 main.go:141] libmachine: Successfully made call to close driver server
I1001 19:17:07.567832   29594 main.go:141] libmachine: Making call to close connection to plugin binary
I1001 19:17:07.567841   29594 main.go:141] libmachine: Making call to close driver server
I1001 19:17:07.567849   29594 main.go:141] libmachine: (functional-338309) Calling .Close
I1001 19:17:07.569356   29594 main.go:141] libmachine: (functional-338309) DBG | Closing plugin on server side
I1001 19:17:07.569373   29594 main.go:141] libmachine: Successfully made call to close driver server
I1001 19:17:07.569401   29594 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
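
Note: the "image ls --format json" output above is a JSON array whose entries carry id, repoDigests, repoTags and size fields. The Go program below is a minimal illustrative sketch (it is not part of the minikube test suite; the struct layout is inferred only from the output shown above, and the binary path and profile name are simply copied from the log) of how that output could be decoded:

// Sketch: decode the JSON printed by "minikube image ls --format json".
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageEntry mirrors the fields visible in the JSON output above.
type imageEntry struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // size is reported as a string of bytes
}

func main() {
	// Same command the test runs above (path and profile taken from the log).
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-338309",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}

	var images []imageEntry
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%s  %s bytes\n", img.RepoTags[0], img.Size)
		}
	}
}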

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-338309 image ls --format yaml --alsologtostderr:
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: fe8054c597be5a2b2dc95d70c5f4289a861ae5db71222b01be0c8e997cf1ef79
repoDigests:
- localhost/minikube-local-cache-test@sha256:f2e86b37bd7e71c570bef2461b62f6b0cc24bf132e3a83a320d65b9ed978940a
repoTags:
- localhost/minikube-local-cache-test:functional-338309
size: "3330"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-338309
size: "4943877"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-338309 image ls --format yaml --alsologtostderr:
I1001 19:17:02.933494   29324 out.go:345] Setting OutFile to fd 1 ...
I1001 19:17:02.933789   29324 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 19:17:02.933801   29324 out.go:358] Setting ErrFile to fd 2...
I1001 19:17:02.933808   29324 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 19:17:02.934078   29324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
I1001 19:17:02.934942   29324 config.go:182] Loaded profile config "functional-338309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 19:17:02.935091   29324 config.go:182] Loaded profile config "functional-338309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 19:17:02.935662   29324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 19:17:02.935722   29324 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 19:17:02.951103   29324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46693
I1001 19:17:02.951625   29324 main.go:141] libmachine: () Calling .GetVersion
I1001 19:17:02.952403   29324 main.go:141] libmachine: Using API Version  1
I1001 19:17:02.952442   29324 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 19:17:02.952817   29324 main.go:141] libmachine: () Calling .GetMachineName
I1001 19:17:02.953015   29324 main.go:141] libmachine: (functional-338309) Calling .GetState
I1001 19:17:02.955541   29324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 19:17:02.955586   29324 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 19:17:02.971422   29324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36551
I1001 19:17:02.971899   29324 main.go:141] libmachine: () Calling .GetVersion
I1001 19:17:02.972386   29324 main.go:141] libmachine: Using API Version  1
I1001 19:17:02.972415   29324 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 19:17:02.972735   29324 main.go:141] libmachine: () Calling .GetMachineName
I1001 19:17:02.972956   29324 main.go:141] libmachine: (functional-338309) Calling .DriverName
I1001 19:17:02.973194   29324 ssh_runner.go:195] Run: systemctl --version
I1001 19:17:02.973234   29324 main.go:141] libmachine: (functional-338309) Calling .GetSSHHostname
I1001 19:17:02.977228   29324 main.go:141] libmachine: (functional-338309) DBG | domain functional-338309 has defined MAC address 52:54:00:d1:ac:15 in network mk-functional-338309
I1001 19:17:02.977633   29324 main.go:141] libmachine: (functional-338309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ac:15", ip: ""} in network mk-functional-338309: {Iface:virbr1 ExpiryTime:2024-10-01 20:14:04 +0000 UTC Type:0 Mac:52:54:00:d1:ac:15 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:functional-338309 Clientid:01:52:54:00:d1:ac:15}
I1001 19:17:02.977657   29324 main.go:141] libmachine: (functional-338309) DBG | domain functional-338309 has defined IP address 192.168.50.74 and MAC address 52:54:00:d1:ac:15 in network mk-functional-338309
I1001 19:17:02.977886   29324 main.go:141] libmachine: (functional-338309) Calling .GetSSHPort
I1001 19:17:02.978069   29324 main.go:141] libmachine: (functional-338309) Calling .GetSSHKeyPath
I1001 19:17:02.978195   29324 main.go:141] libmachine: (functional-338309) Calling .GetSSHUsername
I1001 19:17:02.978314   29324 sshutil.go:53] new ssh client: &{IP:192.168.50.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/functional-338309/id_rsa Username:docker}
I1001 19:17:03.058746   29324 ssh_runner.go:195] Run: sudo crictl images --output json
I1001 19:17:03.097968   29324 main.go:141] libmachine: Making call to close driver server
I1001 19:17:03.097986   29324 main.go:141] libmachine: (functional-338309) Calling .Close
I1001 19:17:03.098228   29324 main.go:141] libmachine: Successfully made call to close driver server
I1001 19:17:03.098244   29324 main.go:141] libmachine: Making call to close connection to plugin binary
I1001 19:17:03.098258   29324 main.go:141] libmachine: Making call to close driver server
I1001 19:17:03.098265   29324 main.go:141] libmachine: (functional-338309) Calling .Close
I1001 19:17:03.098461   29324 main.go:141] libmachine: Successfully made call to close driver server
I1001 19:17:03.098480   29324 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-338309 ssh pgrep buildkitd: exit status 1 (187.783221ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 image build -t localhost/my-image:functional-338309 testdata/build --alsologtostderr
E1001 19:17:04.157080   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-338309 image build -t localhost/my-image:functional-338309 testdata/build --alsologtostderr: (3.779964861s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-338309 image build -t localhost/my-image:functional-338309 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 247dcced9c6
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-338309
--> 302df390070
Successfully tagged localhost/my-image:functional-338309
302df390070b5614836d6523d262d067521fda81fc3c9f970a22bd7a7e91713b
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-338309 image build -t localhost/my-image:functional-338309 testdata/build --alsologtostderr:
I1001 19:17:03.333502   29378 out.go:345] Setting OutFile to fd 1 ...
I1001 19:17:03.333626   29378 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 19:17:03.333634   29378 out.go:358] Setting ErrFile to fd 2...
I1001 19:17:03.333639   29378 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 19:17:03.333830   29378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
I1001 19:17:03.334416   29378 config.go:182] Loaded profile config "functional-338309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 19:17:03.334895   29378 config.go:182] Loaded profile config "functional-338309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 19:17:03.335245   29378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 19:17:03.335280   29378 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 19:17:03.350469   29378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40949
I1001 19:17:03.350986   29378 main.go:141] libmachine: () Calling .GetVersion
I1001 19:17:03.351512   29378 main.go:141] libmachine: Using API Version  1
I1001 19:17:03.351527   29378 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 19:17:03.351919   29378 main.go:141] libmachine: () Calling .GetMachineName
I1001 19:17:03.352094   29378 main.go:141] libmachine: (functional-338309) Calling .GetState
I1001 19:17:03.354133   29378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 19:17:03.354191   29378 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 19:17:03.369842   29378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35305
I1001 19:17:03.370307   29378 main.go:141] libmachine: () Calling .GetVersion
I1001 19:17:03.370759   29378 main.go:141] libmachine: Using API Version  1
I1001 19:17:03.370783   29378 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 19:17:03.371129   29378 main.go:141] libmachine: () Calling .GetMachineName
I1001 19:17:03.371317   29378 main.go:141] libmachine: (functional-338309) Calling .DriverName
I1001 19:17:03.371526   29378 ssh_runner.go:195] Run: systemctl --version
I1001 19:17:03.371555   29378 main.go:141] libmachine: (functional-338309) Calling .GetSSHHostname
I1001 19:17:03.374386   29378 main.go:141] libmachine: (functional-338309) DBG | domain functional-338309 has defined MAC address 52:54:00:d1:ac:15 in network mk-functional-338309
I1001 19:17:03.374806   29378 main.go:141] libmachine: (functional-338309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ac:15", ip: ""} in network mk-functional-338309: {Iface:virbr1 ExpiryTime:2024-10-01 20:14:04 +0000 UTC Type:0 Mac:52:54:00:d1:ac:15 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:functional-338309 Clientid:01:52:54:00:d1:ac:15}
I1001 19:17:03.374844   29378 main.go:141] libmachine: (functional-338309) DBG | domain functional-338309 has defined IP address 192.168.50.74 and MAC address 52:54:00:d1:ac:15 in network mk-functional-338309
I1001 19:17:03.374960   29378 main.go:141] libmachine: (functional-338309) Calling .GetSSHPort
I1001 19:17:03.375137   29378 main.go:141] libmachine: (functional-338309) Calling .GetSSHKeyPath
I1001 19:17:03.375267   29378 main.go:141] libmachine: (functional-338309) Calling .GetSSHUsername
I1001 19:17:03.375430   29378 sshutil.go:53] new ssh client: &{IP:192.168.50.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/functional-338309/id_rsa Username:docker}
I1001 19:17:03.458349   29378 build_images.go:161] Building image from path: /tmp/build.3795083163.tar
I1001 19:17:03.458416   29378 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1001 19:17:03.475169   29378 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3795083163.tar
I1001 19:17:03.479421   29378 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3795083163.tar: stat -c "%s %y" /var/lib/minikube/build/build.3795083163.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3795083163.tar': No such file or directory
I1001 19:17:03.479465   29378 ssh_runner.go:362] scp /tmp/build.3795083163.tar --> /var/lib/minikube/build/build.3795083163.tar (3072 bytes)
I1001 19:17:03.514308   29378 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3795083163
I1001 19:17:03.525688   29378 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3795083163 -xf /var/lib/minikube/build/build.3795083163.tar
I1001 19:17:03.535258   29378 crio.go:315] Building image: /var/lib/minikube/build/build.3795083163
I1001 19:17:03.535336   29378 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-338309 /var/lib/minikube/build/build.3795083163 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1001 19:17:07.015466   29378 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-338309 /var/lib/minikube/build/build.3795083163 --cgroup-manager=cgroupfs: (3.480100472s)
I1001 19:17:07.015552   29378 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3795083163
I1001 19:17:07.049938   29378 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3795083163.tar
I1001 19:17:07.065392   29378 build_images.go:217] Built localhost/my-image:functional-338309 from /tmp/build.3795083163.tar
I1001 19:17:07.065436   29378 build_images.go:133] succeeded building to: functional-338309
I1001 19:17:07.065443   29378 build_images.go:134] failed building to: 
I1001 19:17:07.065464   29378 main.go:141] libmachine: Making call to close driver server
I1001 19:17:07.065479   29378 main.go:141] libmachine: (functional-338309) Calling .Close
I1001 19:17:07.065822   29378 main.go:141] libmachine: (functional-338309) DBG | Closing plugin on server side
I1001 19:17:07.065871   29378 main.go:141] libmachine: Successfully made call to close driver server
I1001 19:17:07.065879   29378 main.go:141] libmachine: Making call to close connection to plugin binary
I1001 19:17:07.065891   29378 main.go:141] libmachine: Making call to close driver server
I1001 19:17:07.065898   29378 main.go:141] libmachine: (functional-338309) Calling .Close
I1001 19:17:07.066109   29378 main.go:141] libmachine: Successfully made call to close driver server
I1001 19:17:07.066124   29378 main.go:141] libmachine: Making call to close connection to plugin binary
I1001 19:17:07.066137   29378 main.go:141] libmachine: (functional-338309) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.21s)

TestFunctional/parallel/ImageCommands/Setup (1.97s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.954414165s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-338309
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.97s)

TestFunctional/parallel/ServiceCmd/DeployApp (23.18s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-338309 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-338309 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-pkvnh" [3585567d-c617-46a4-9d6f-a0b7bf099087] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-pkvnh" [3585567d-c617-46a4-9d6f-a0b7bf099087] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 23.004730248s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (23.18s)
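
Note: the "waiting 10m0s for pods matching app=hello-node" step above is the test framework polling until the deployment's pods report Ready. The Go program below is a rough, self-contained sketch of that idea using client-go; it is illustrative only (the real helpers_test.go logic differs, and the kubeconfig/context handling here is a simplifying assumption):

// Sketch: poll until every pod matching app=hello-node in "default" is Ready.
// Uses the default kubeconfig and current context, which is a simplification;
// the test above targets the functional-338309 context explicitly.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Poll every 2s, for at most 10 minutes, mirroring the timeout above.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 10*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("default").List(ctx, metav1.ListOptions{
				LabelSelector: "app=hello-node",
			})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // keep polling on transient errors or before pods exist
			}
			for _, p := range pods.Items {
				if !podReady(&p) {
					return false, nil
				}
			}
			return true, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("all app=hello-node pods are Ready")
}

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

Polling keeps the sketch short; a production helper would more likely use a watch or an informer rather than a fixed interval.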

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 image load --daemon kicbase/echo-server:functional-338309 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-338309 image load --daemon kicbase/echo-server:functional-338309 --alsologtostderr: (1.959652312s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.36s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.14s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 image load --daemon kicbase/echo-server:functional-338309 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.14s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.8s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-338309
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 image load --daemon kicbase/echo-server:functional-338309 --alsologtostderr
I1001 19:16:41.590790   18430 retry.go:31] will retry after 2.555800475s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:1fab0c57-07bb-4c3b-ad05-c2f541dffbec ResourceVersion:733 Generation:0 CreationTimestamp:2024-10-01 19:16:41 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0007016c0 VolumeMode:0xc000701700 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-338309 image load --daemon kicbase/echo-server:functional-338309 --alsologtostderr: (2.580830921s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.80s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.01s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 image save kicbase/echo-server:functional-338309 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-338309 image save kicbase/echo-server:functional-338309 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (4.012597871s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.73s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 image rm kicbase/echo-server:functional-338309 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.73s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.17s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-338309 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.036174057s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 image ls
functional_test.go:451: (dbg) Done: out/minikube-linux-amd64 -p functional-338309 image ls: (1.135722058s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-338309
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 image save --daemon kicbase/echo-server:functional-338309 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-338309
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "259.178773ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "43.891033ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
E1001 19:16:59.024789   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
E1001 19:16:59.031201   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
E1001 19:16:59.042709   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
E1001 19:16:59.064158   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1366: Took "260.950488ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "47.554782ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 service list -o json
E1001 19:16:59.105860   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1494: Took "440.112123ms" to run "out/minikube-linux-amd64 -p functional-338309 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-338309 /tmp/TestFunctionalparallelMountCmdany-port4271255090/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727810219122824320" to /tmp/TestFunctionalparallelMountCmdany-port4271255090/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727810219122824320" to /tmp/TestFunctionalparallelMountCmdany-port4271255090/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727810219122824320" to /tmp/TestFunctionalparallelMountCmdany-port4271255090/001/test-1727810219122824320
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh "findmnt -T /mount-9p | grep 9p"
E1001 19:16:59.187311   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-338309 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (191.934465ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1001 19:16:59.315086   18430 retry.go:31] will retry after 334.154385ms: exit status 1
E1001 19:16:59.348621   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh "findmnt -T /mount-9p | grep 9p"
E1001 19:16:59.670700   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  1 19:16 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  1 19:16 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  1 19:16 test-1727810219122824320
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh cat /mount-9p/test-1727810219122824320
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-338309 replace --force -f testdata/busybox-mount-test.yaml
E1001 19:17:00.312930   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c1d48946-f6c3-4c1f-995a-12daa3e79d50] Pending
helpers_test.go:344: "busybox-mount" [c1d48946-f6c3-4c1f-995a-12daa3e79d50] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E1001 19:17:01.594984   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [c1d48946-f6c3-4c1f-995a-12daa3e79d50] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c1d48946-f6c3-4c1f-995a-12daa3e79d50] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.007207278s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-338309 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-338309 /tmp/TestFunctionalparallelMountCmdany-port4271255090/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.50.74:30925
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.50.74:30925
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-338309 /tmp/TestFunctionalparallelMountCmdspecific-port2509273301/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-338309 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (256.805994ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1001 19:17:06.753969   18430 retry.go:31] will retry after 742.830568ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-338309 /tmp/TestFunctionalparallelMountCmdspecific-port2509273301/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-338309 ssh "sudo umount -f /mount-9p": exit status 1 (253.670683ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-338309 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-338309 /tmp/TestFunctionalparallelMountCmdspecific-port2509273301/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.18s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-338309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3015727586/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-338309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3015727586/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-338309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3015727586/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-338309 ssh "findmnt -T" /mount1: exit status 1 (278.937681ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1001 19:17:08.954166   18430 retry.go:31] will retry after 682.437886ms: exit status 1
E1001 19:17:09.278818   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-338309 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-338309 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-338309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3015727586/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-338309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3015727586/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-338309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3015727586/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
2024/10/01 19:17:13 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.71s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-338309
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-338309
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-338309
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (198.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-193737 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1001 19:21:34.840509   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
E1001 19:21:34.846969   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
E1001 19:21:34.858407   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
E1001 19:21:34.879805   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
E1001 19:21:34.921238   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
E1001 19:21:35.002726   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
E1001 19:21:35.164328   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
E1001 19:21:35.486076   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
E1001 19:21:36.128151   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
E1001 19:21:37.410404   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
E1001 19:21:39.971861   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
E1001 19:21:45.093810   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
E1001 19:21:55.336176   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
E1001 19:21:59.025339   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
E1001 19:22:15.817916   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
E1001 19:22:26.728552   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
E1001 19:22:56.780099   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-193737 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m17.702690718s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (198.34s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-193737 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-193737 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-193737 -- rollout status deployment/busybox: (4.611779579s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-193737 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-193737 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-193737 -- exec busybox-7dff88458-fz5bb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-193737 -- exec busybox-7dff88458-qzzzv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-193737 -- exec busybox-7dff88458-rbjkx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-193737 -- exec busybox-7dff88458-fz5bb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-193737 -- exec busybox-7dff88458-qzzzv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-193737 -- exec busybox-7dff88458-rbjkx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-193737 -- exec busybox-7dff88458-fz5bb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-193737 -- exec busybox-7dff88458-qzzzv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-193737 -- exec busybox-7dff88458-rbjkx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.81s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-193737 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-193737 -- exec busybox-7dff88458-fz5bb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-193737 -- exec busybox-7dff88458-fz5bb -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-193737 -- exec busybox-7dff88458-qzzzv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-193737 -- exec busybox-7dff88458-qzzzv -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-193737 -- exec busybox-7dff88458-rbjkx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-193737 -- exec busybox-7dff88458-rbjkx -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.20s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (59.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-193737 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-193737 -v=7 --alsologtostderr: (58.450493151s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.29s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-193737 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 cp testdata/cp-test.txt ha-193737:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 cp ha-193737:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1122815483/001/cp-test_ha-193737.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 cp ha-193737:/home/docker/cp-test.txt ha-193737-m02:/home/docker/cp-test_ha-193737_ha-193737-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737-m02 "sudo cat /home/docker/cp-test_ha-193737_ha-193737-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 cp ha-193737:/home/docker/cp-test.txt ha-193737-m03:/home/docker/cp-test_ha-193737_ha-193737-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737-m03 "sudo cat /home/docker/cp-test_ha-193737_ha-193737-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 cp ha-193737:/home/docker/cp-test.txt ha-193737-m04:/home/docker/cp-test_ha-193737_ha-193737-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737-m04 "sudo cat /home/docker/cp-test_ha-193737_ha-193737-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 cp testdata/cp-test.txt ha-193737-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 cp ha-193737-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1122815483/001/cp-test_ha-193737-m02.txt
E1001 19:24:18.702236   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 cp ha-193737-m02:/home/docker/cp-test.txt ha-193737:/home/docker/cp-test_ha-193737-m02_ha-193737.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737 "sudo cat /home/docker/cp-test_ha-193737-m02_ha-193737.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 cp ha-193737-m02:/home/docker/cp-test.txt ha-193737-m03:/home/docker/cp-test_ha-193737-m02_ha-193737-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737-m03 "sudo cat /home/docker/cp-test_ha-193737-m02_ha-193737-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 cp ha-193737-m02:/home/docker/cp-test.txt ha-193737-m04:/home/docker/cp-test_ha-193737-m02_ha-193737-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737-m04 "sudo cat /home/docker/cp-test_ha-193737-m02_ha-193737-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 cp testdata/cp-test.txt ha-193737-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 cp ha-193737-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1122815483/001/cp-test_ha-193737-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 cp ha-193737-m03:/home/docker/cp-test.txt ha-193737:/home/docker/cp-test_ha-193737-m03_ha-193737.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737 "sudo cat /home/docker/cp-test_ha-193737-m03_ha-193737.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 cp ha-193737-m03:/home/docker/cp-test.txt ha-193737-m02:/home/docker/cp-test_ha-193737-m03_ha-193737-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737-m02 "sudo cat /home/docker/cp-test_ha-193737-m03_ha-193737-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 cp ha-193737-m03:/home/docker/cp-test.txt ha-193737-m04:/home/docker/cp-test_ha-193737-m03_ha-193737-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737-m04 "sudo cat /home/docker/cp-test_ha-193737-m03_ha-193737-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 cp testdata/cp-test.txt ha-193737-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 cp ha-193737-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1122815483/001/cp-test_ha-193737-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 cp ha-193737-m04:/home/docker/cp-test.txt ha-193737:/home/docker/cp-test_ha-193737-m04_ha-193737.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737 "sudo cat /home/docker/cp-test_ha-193737-m04_ha-193737.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 cp ha-193737-m04:/home/docker/cp-test.txt ha-193737-m02:/home/docker/cp-test_ha-193737-m04_ha-193737-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737-m02 "sudo cat /home/docker/cp-test_ha-193737-m04_ha-193737-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 cp ha-193737-m04:/home/docker/cp-test.txt ha-193737-m03:/home/docker/cp-test_ha-193737-m04_ha-193737-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 ssh -n ha-193737-m03 "sudo cat /home/docker/cp-test_ha-193737-m04_ha-193737-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.73s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 node delete m03 -v=7 --alsologtostderr
E1001 19:33:22.090268   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-193737 node delete m03 -v=7 --alsologtostderr: (16.004861072s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.79s)
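The go-template passed to kubectl get nodes above walks every node's status.conditions and prints the status of the condition whose type is "Ready". kubectl evaluates go-templates with Go's text/template over the JSON form of the objects; the following self-contained sketch runs the same template over a hypothetical two-node list purely to show what it produces.

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// The same template string the test passes via -o go-template: for every
// item (node), print the status of its "Ready" condition.
const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

// A trimmed-down, made-up stand-in for `kubectl get nodes -o json` output.
const nodeList = `{
  "items": [
    {"status": {"conditions": [
      {"type": "MemoryPressure", "status": "False"},
      {"type": "Ready", "status": "True"}
    ]}},
    {"status": {"conditions": [
      {"type": "Ready", "status": "True"}
    ]}}
  ]
}`

func main() {
	var data interface{}
	if err := json.Unmarshal([]byte(nodeList), &data); err != nil {
		panic(err)
	}
	t := template.Must(template.New("ready").Parse(tmpl))
	// Prints " True" on one line per node in the sample list.
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}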

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (256.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-193737 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1001 19:36:34.844715   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
E1001 19:36:59.025845   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
E1001 19:37:57.908930   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-193737 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m15.657739962s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (256.42s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (79.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-193737 --control-plane -v=7 --alsologtostderr
E1001 19:41:34.839767   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-193737 --control-plane -v=7 --alsologtostderr: (1m19.014539054s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-193737 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.92s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                    
TestJSONOutput/start/Command (52.52s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-286472 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1001 19:41:59.027952   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-286472 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (52.51661014s)
--- PASS: TestJSONOutput/start/Command (52.52s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-286472 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-286472 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.67s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-286472 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-286472 --output=json --user=testUser: (6.665376671s)
--- PASS: TestJSONOutput/stop/Command (6.67s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-161005 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-161005 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (62.053175ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"970dccb7-1963-4d95-8af7-66faabecd822","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-161005] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b632373a-5176-4ab5-8060-9daafa0a58c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19736"}}
	{"specversion":"1.0","id":"2513390e-a63f-438c-9ba2-e13d10520f94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ba651732-100d-4839-a861-2301f1bcac0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig"}}
	{"specversion":"1.0","id":"0ab6b7c1-fee7-4a7b-9aa3-8cddd7bab275","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube"}}
	{"specversion":"1.0","id":"613e23bf-2345-4009-8ce1-313d9544dc46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"effdf50b-0b14-4ebe-87b4-ac7233efae35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b1101c3d-0cee-4779-b183-89f8a0fb7c48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-161005" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-161005
--- PASS: TestErrorJSONOutput (0.20s)
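The captured stdout above is the CloudEvents-style JSON that minikube emits with --output=json, one event per line. As an illustration only, one of those lines can be decoded with encoding/json; the struct below mirrors just the fields visible in these log lines and is not minikube's own event type.

package main

import (
	"encoding/json"
	"fmt"
)

// event covers only the fields shown in the log lines above (illustrative).
type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// The error event from the json-output-error-161005 run above.
	line := `{"specversion":"1.0","id":"b1101c3d-0cee-4779-b183-89f8a0fb7c48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (exit code %s)\n", ev.Type, ev.Data["message"], ev.Data["exitcode"])
}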

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (84.16s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-887859 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-887859 --driver=kvm2  --container-runtime=crio: (38.267420368s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-901223 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-901223 --driver=kvm2  --container-runtime=crio: (43.063763275s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-887859
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-901223
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-901223" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-901223
helpers_test.go:175: Cleaning up "first-887859" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-887859
--- PASS: TestMinikubeProfile (84.16s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (30.57s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-454891 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-454891 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.574244347s)
--- PASS: TestMountStart/serial/StartWithMountFirst (30.57s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-454891 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-454891 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (24.1s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-467291 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-467291 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.096785379s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.10s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-467291 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-467291 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.91s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-454891 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.91s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-467291 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-467291 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-467291
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-467291: (1.277889405s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.5s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-467291
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-467291: (21.504127692s)
--- PASS: TestMountStart/serial/RestartStopped (22.50s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-467291 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-467291 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (107.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-325713 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1001 19:46:34.840589   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
E1001 19:46:59.025257   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-325713 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m46.974492567s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (107.39s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325713 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325713 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-325713 -- rollout status deployment/busybox: (4.461745017s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325713 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325713 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325713 -- exec busybox-7dff88458-b9lwm -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325713 -- exec busybox-7dff88458-nhjc5 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325713 -- exec busybox-7dff88458-b9lwm -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325713 -- exec busybox-7dff88458-nhjc5 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325713 -- exec busybox-7dff88458-b9lwm -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325713 -- exec busybox-7dff88458-nhjc5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.90s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325713 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325713 -- exec busybox-7dff88458-b9lwm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325713 -- exec busybox-7dff88458-b9lwm -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325713 -- exec busybox-7dff88458-nhjc5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325713 -- exec busybox-7dff88458-nhjc5 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                    
TestMultiNode/serial/AddNode (52.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-325713 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-325713 -v 3 --alsologtostderr: (51.980615447s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (52.56s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-325713 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.60s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 cp testdata/cp-test.txt multinode-325713:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 ssh -n multinode-325713 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 cp multinode-325713:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile187864513/001/cp-test_multinode-325713.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 ssh -n multinode-325713 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 cp multinode-325713:/home/docker/cp-test.txt multinode-325713-m02:/home/docker/cp-test_multinode-325713_multinode-325713-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 ssh -n multinode-325713 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 ssh -n multinode-325713-m02 "sudo cat /home/docker/cp-test_multinode-325713_multinode-325713-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 cp multinode-325713:/home/docker/cp-test.txt multinode-325713-m03:/home/docker/cp-test_multinode-325713_multinode-325713-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 ssh -n multinode-325713 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 ssh -n multinode-325713-m03 "sudo cat /home/docker/cp-test_multinode-325713_multinode-325713-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 cp testdata/cp-test.txt multinode-325713-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 ssh -n multinode-325713-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 cp multinode-325713-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile187864513/001/cp-test_multinode-325713-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 ssh -n multinode-325713-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 cp multinode-325713-m02:/home/docker/cp-test.txt multinode-325713:/home/docker/cp-test_multinode-325713-m02_multinode-325713.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 ssh -n multinode-325713-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 ssh -n multinode-325713 "sudo cat /home/docker/cp-test_multinode-325713-m02_multinode-325713.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 cp multinode-325713-m02:/home/docker/cp-test.txt multinode-325713-m03:/home/docker/cp-test_multinode-325713-m02_multinode-325713-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 ssh -n multinode-325713-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 ssh -n multinode-325713-m03 "sudo cat /home/docker/cp-test_multinode-325713-m02_multinode-325713-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 cp testdata/cp-test.txt multinode-325713-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 ssh -n multinode-325713-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 cp multinode-325713-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile187864513/001/cp-test_multinode-325713-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 ssh -n multinode-325713-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 cp multinode-325713-m03:/home/docker/cp-test.txt multinode-325713:/home/docker/cp-test_multinode-325713-m03_multinode-325713.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 ssh -n multinode-325713-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 ssh -n multinode-325713 "sudo cat /home/docker/cp-test_multinode-325713-m03_multinode-325713.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 cp multinode-325713-m03:/home/docker/cp-test.txt multinode-325713-m02:/home/docker/cp-test_multinode-325713-m03_multinode-325713-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 ssh -n multinode-325713-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 ssh -n multinode-325713-m02 "sudo cat /home/docker/cp-test_multinode-325713-m03_multinode-325713-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.57s)
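
The helpers above follow a copy-then-verify pattern: `minikube cp` pushes the file, then `minikube ssh -- sudo cat` reads it back on each node. Below is a minimal Go sketch of that same pattern; the binary path, profile, and node names are taken from the log and would need adjusting for any other environment.

-- example sketch (not from the test run) --
package main

import (
    "bytes"
    "fmt"
    "log"
    "os"
    "os/exec"
)

// run invokes the minikube binary used in this report (assumed relative path).
func run(args ...string) []byte {
    out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
    if err != nil {
        log.Fatalf("%v: %v\n%s", args, err, out)
    }
    return out
}

func main() {
    const profile, node = "multinode-325713", "multinode-325713-m02"

    want, err := os.ReadFile("testdata/cp-test.txt")
    if err != nil {
        log.Fatal(err)
    }

    // Same copy-then-verify pattern as the helpers above.
    run("-p", profile, "cp", "testdata/cp-test.txt", node+":/home/docker/cp-test.txt")
    got := run("-p", profile, "ssh", "-n", node, "sudo cat /home/docker/cp-test.txt")

    if bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
        fmt.Println("copied file matches")
    } else {
        fmt.Println("copied file differs")
    }
}
-- /example sketch --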

                                                
                                    
TestMultiNode/serial/StopNode (2.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-325713 node stop m03: (1.401591523s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-325713 status: exit status 7 (454.320281ms)

                                                
                                                
-- stdout --
	multinode-325713
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-325713-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-325713-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-325713 status --alsologtostderr: exit status 7 (436.113318ms)

                                                
                                                
-- stdout --
	multinode-325713
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-325713-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-325713-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 19:48:28.424217   48094 out.go:345] Setting OutFile to fd 1 ...
	I1001 19:48:28.424537   48094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:48:28.424547   48094 out.go:358] Setting ErrFile to fd 2...
	I1001 19:48:28.424551   48094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:48:28.424798   48094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 19:48:28.425091   48094 out.go:352] Setting JSON to false
	I1001 19:48:28.425119   48094 mustload.go:65] Loading cluster: multinode-325713
	I1001 19:48:28.425181   48094 notify.go:220] Checking for updates...
	I1001 19:48:28.425558   48094 config.go:182] Loaded profile config "multinode-325713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 19:48:28.425575   48094 status.go:174] checking status of multinode-325713 ...
	I1001 19:48:28.425997   48094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:48:28.426050   48094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:48:28.443057   48094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40045
	I1001 19:48:28.443577   48094 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:48:28.444290   48094 main.go:141] libmachine: Using API Version  1
	I1001 19:48:28.444311   48094 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:48:28.444737   48094 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:48:28.444996   48094 main.go:141] libmachine: (multinode-325713) Calling .GetState
	I1001 19:48:28.446746   48094 status.go:371] multinode-325713 host status = "Running" (err=<nil>)
	I1001 19:48:28.446769   48094 host.go:66] Checking if "multinode-325713" exists ...
	I1001 19:48:28.447185   48094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:48:28.447238   48094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:48:28.463464   48094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34921
	I1001 19:48:28.463984   48094 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:48:28.464527   48094 main.go:141] libmachine: Using API Version  1
	I1001 19:48:28.464563   48094 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:48:28.464935   48094 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:48:28.465125   48094 main.go:141] libmachine: (multinode-325713) Calling .GetIP
	I1001 19:48:28.468655   48094 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:48:28.469147   48094 main.go:141] libmachine: (multinode-325713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:17:a5", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:45:45 +0000 UTC Type:0 Mac:52:54:00:df:17:a5 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-325713 Clientid:01:52:54:00:df:17:a5}
	I1001 19:48:28.469190   48094 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined IP address 192.168.39.165 and MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:48:28.469273   48094 host.go:66] Checking if "multinode-325713" exists ...
	I1001 19:48:28.469706   48094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:48:28.469744   48094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:48:28.485689   48094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34817
	I1001 19:48:28.486158   48094 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:48:28.486673   48094 main.go:141] libmachine: Using API Version  1
	I1001 19:48:28.486702   48094 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:48:28.487050   48094 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:48:28.487277   48094 main.go:141] libmachine: (multinode-325713) Calling .DriverName
	I1001 19:48:28.487458   48094 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 19:48:28.487489   48094 main.go:141] libmachine: (multinode-325713) Calling .GetSSHHostname
	I1001 19:48:28.490525   48094 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:48:28.491000   48094 main.go:141] libmachine: (multinode-325713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:17:a5", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:45:45 +0000 UTC Type:0 Mac:52:54:00:df:17:a5 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-325713 Clientid:01:52:54:00:df:17:a5}
	I1001 19:48:28.491028   48094 main.go:141] libmachine: (multinode-325713) DBG | domain multinode-325713 has defined IP address 192.168.39.165 and MAC address 52:54:00:df:17:a5 in network mk-multinode-325713
	I1001 19:48:28.491258   48094 main.go:141] libmachine: (multinode-325713) Calling .GetSSHPort
	I1001 19:48:28.491445   48094 main.go:141] libmachine: (multinode-325713) Calling .GetSSHKeyPath
	I1001 19:48:28.491604   48094 main.go:141] libmachine: (multinode-325713) Calling .GetSSHUsername
	I1001 19:48:28.491723   48094 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/multinode-325713/id_rsa Username:docker}
	I1001 19:48:28.575605   48094 ssh_runner.go:195] Run: systemctl --version
	I1001 19:48:28.581950   48094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 19:48:28.597791   48094 kubeconfig.go:125] found "multinode-325713" server: "https://192.168.39.165:8443"
	I1001 19:48:28.597828   48094 api_server.go:166] Checking apiserver status ...
	I1001 19:48:28.597857   48094 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 19:48:28.611978   48094 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1046/cgroup
	W1001 19:48:28.623685   48094 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1046/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1001 19:48:28.623760   48094 ssh_runner.go:195] Run: ls
	I1001 19:48:28.628910   48094 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I1001 19:48:28.634404   48094 api_server.go:279] https://192.168.39.165:8443/healthz returned 200:
	ok
	I1001 19:48:28.634432   48094 status.go:463] multinode-325713 apiserver status = Running (err=<nil>)
	I1001 19:48:28.634441   48094 status.go:176] multinode-325713 status: &{Name:multinode-325713 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 19:48:28.634457   48094 status.go:174] checking status of multinode-325713-m02 ...
	I1001 19:48:28.634788   48094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:48:28.634825   48094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:48:28.651963   48094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37263
	I1001 19:48:28.652501   48094 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:48:28.653044   48094 main.go:141] libmachine: Using API Version  1
	I1001 19:48:28.653060   48094 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:48:28.653382   48094 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:48:28.653598   48094 main.go:141] libmachine: (multinode-325713-m02) Calling .GetState
	I1001 19:48:28.655326   48094 status.go:371] multinode-325713-m02 host status = "Running" (err=<nil>)
	I1001 19:48:28.655342   48094 host.go:66] Checking if "multinode-325713-m02" exists ...
	I1001 19:48:28.655681   48094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:48:28.655724   48094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:48:28.672537   48094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44115
	I1001 19:48:28.673047   48094 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:48:28.673682   48094 main.go:141] libmachine: Using API Version  1
	I1001 19:48:28.673751   48094 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:48:28.674125   48094 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:48:28.674302   48094 main.go:141] libmachine: (multinode-325713-m02) Calling .GetIP
	I1001 19:48:28.677478   48094 main.go:141] libmachine: (multinode-325713-m02) DBG | domain multinode-325713-m02 has defined MAC address 52:54:00:da:7f:5a in network mk-multinode-325713
	I1001 19:48:28.678051   48094 main.go:141] libmachine: (multinode-325713-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7f:5a", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:46:43 +0000 UTC Type:0 Mac:52:54:00:da:7f:5a Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:multinode-325713-m02 Clientid:01:52:54:00:da:7f:5a}
	I1001 19:48:28.678077   48094 main.go:141] libmachine: (multinode-325713-m02) DBG | domain multinode-325713-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:da:7f:5a in network mk-multinode-325713
	I1001 19:48:28.678302   48094 host.go:66] Checking if "multinode-325713-m02" exists ...
	I1001 19:48:28.678617   48094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:48:28.678655   48094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:48:28.694567   48094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46237
	I1001 19:48:28.695035   48094 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:48:28.695589   48094 main.go:141] libmachine: Using API Version  1
	I1001 19:48:28.695611   48094 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:48:28.695954   48094 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:48:28.696151   48094 main.go:141] libmachine: (multinode-325713-m02) Calling .DriverName
	I1001 19:48:28.696340   48094 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 19:48:28.696377   48094 main.go:141] libmachine: (multinode-325713-m02) Calling .GetSSHHostname
	I1001 19:48:28.699268   48094 main.go:141] libmachine: (multinode-325713-m02) DBG | domain multinode-325713-m02 has defined MAC address 52:54:00:da:7f:5a in network mk-multinode-325713
	I1001 19:48:28.699724   48094 main.go:141] libmachine: (multinode-325713-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7f:5a", ip: ""} in network mk-multinode-325713: {Iface:virbr1 ExpiryTime:2024-10-01 20:46:43 +0000 UTC Type:0 Mac:52:54:00:da:7f:5a Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:multinode-325713-m02 Clientid:01:52:54:00:da:7f:5a}
	I1001 19:48:28.699764   48094 main.go:141] libmachine: (multinode-325713-m02) DBG | domain multinode-325713-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:da:7f:5a in network mk-multinode-325713
	I1001 19:48:28.699837   48094 main.go:141] libmachine: (multinode-325713-m02) Calling .GetSSHPort
	I1001 19:48:28.699991   48094 main.go:141] libmachine: (multinode-325713-m02) Calling .GetSSHKeyPath
	I1001 19:48:28.700152   48094 main.go:141] libmachine: (multinode-325713-m02) Calling .GetSSHUsername
	I1001 19:48:28.700271   48094 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19736-11198/.minikube/machines/multinode-325713-m02/id_rsa Username:docker}
	I1001 19:48:28.783924   48094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 19:48:28.797376   48094 status.go:176] multinode-325713-m02 status: &{Name:multinode-325713-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1001 19:48:28.797416   48094 status.go:174] checking status of multinode-325713-m03 ...
	I1001 19:48:28.797787   48094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 19:48:28.797838   48094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 19:48:28.813401   48094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45505
	I1001 19:48:28.813976   48094 main.go:141] libmachine: () Calling .GetVersion
	I1001 19:48:28.814536   48094 main.go:141] libmachine: Using API Version  1
	I1001 19:48:28.814559   48094 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 19:48:28.815025   48094 main.go:141] libmachine: () Calling .GetMachineName
	I1001 19:48:28.815225   48094 main.go:141] libmachine: (multinode-325713-m03) Calling .GetState
	I1001 19:48:28.816806   48094 status.go:371] multinode-325713-m03 host status = "Stopped" (err=<nil>)
	I1001 19:48:28.816827   48094 status.go:384] host is not running, skipping remaining checks
	I1001 19:48:28.816850   48094 status.go:176] multinode-325713-m03 status: &{Name:multinode-325713-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
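
Note that `minikube status` returned exit status 7 here because one node is stopped, while the per-node breakdown is still printed on stdout. The Go sketch below (assuming the same binary path and profile as above) shows one way to read that exit code instead of treating any non-zero exit as fatal; it is an illustration, not the test's own helper.

-- example sketch (not from the test run) --
package main

import (
    "errors"
    "fmt"
    "log"
    "os/exec"
)

func main() {
    cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-325713", "status")
    out, err := cmd.CombinedOutput()

    var exitErr *exec.ExitError
    switch {
    case err == nil:
        fmt.Println("all nodes running")
    case errors.As(err, &exitErr):
        // Exit status 7 is what the run above returned for a stopped node;
        // the per-node breakdown is still available on stdout.
        fmt.Printf("status exited with code %d\n%s", exitErr.ExitCode(), out)
    default:
        log.Fatal(err) // e.g. binary not found
    }
}
-- /example sketch --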

                                                
                                    
TestMultiNode/serial/StartAfterStop (40.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-325713 node start m03 -v=7 --alsologtostderr: (39.381004293s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.03s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 node delete m03
E1001 19:54:37.910270   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-325713 node delete m03: (1.580504758s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.10s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (178.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-325713 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-325713 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m58.444517314s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325713 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (178.99s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (42.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-325713
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-325713-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-325713-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (61.310155ms)

                                                
                                                
-- stdout --
	* [multinode-325713-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19736
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-325713-m02' is duplicated with machine name 'multinode-325713-m02' in profile 'multinode-325713'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-325713-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-325713-m03 --driver=kvm2  --container-runtime=crio: (41.077511933s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-325713
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-325713: exit status 80 (209.308356ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-325713 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-325713-m03 already exists in multinode-325713-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-325713-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.19s)

                                                
                                    
TestScheduledStopUnix (114.75s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-142421 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-142421 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.175433948s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-142421 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-142421 -n scheduled-stop-142421
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-142421 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1001 20:04:31.962133   18430 retry.go:31] will retry after 149.821µs: open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/scheduled-stop-142421/pid: no such file or directory
I1001 20:04:31.963310   18430 retry.go:31] will retry after 197.757µs: open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/scheduled-stop-142421/pid: no such file or directory
I1001 20:04:31.964418   18430 retry.go:31] will retry after 282.431µs: open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/scheduled-stop-142421/pid: no such file or directory
I1001 20:04:31.965549   18430 retry.go:31] will retry after 334.092µs: open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/scheduled-stop-142421/pid: no such file or directory
I1001 20:04:31.966698   18430 retry.go:31] will retry after 478.792µs: open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/scheduled-stop-142421/pid: no such file or directory
I1001 20:04:31.967825   18430 retry.go:31] will retry after 879.434µs: open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/scheduled-stop-142421/pid: no such file or directory
I1001 20:04:31.968999   18430 retry.go:31] will retry after 732.631µs: open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/scheduled-stop-142421/pid: no such file or directory
I1001 20:04:31.970168   18430 retry.go:31] will retry after 1.458455ms: open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/scheduled-stop-142421/pid: no such file or directory
I1001 20:04:31.972467   18430 retry.go:31] will retry after 1.647867ms: open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/scheduled-stop-142421/pid: no such file or directory
I1001 20:04:31.974717   18430 retry.go:31] will retry after 5.405175ms: open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/scheduled-stop-142421/pid: no such file or directory
I1001 20:04:31.980952   18430 retry.go:31] will retry after 8.497366ms: open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/scheduled-stop-142421/pid: no such file or directory
I1001 20:04:31.990210   18430 retry.go:31] will retry after 5.812447ms: open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/scheduled-stop-142421/pid: no such file or directory
I1001 20:04:31.996438   18430 retry.go:31] will retry after 6.81413ms: open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/scheduled-stop-142421/pid: no such file or directory
I1001 20:04:32.003678   18430 retry.go:31] will retry after 25.801881ms: open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/scheduled-stop-142421/pid: no such file or directory
I1001 20:04:32.029957   18430 retry.go:31] will retry after 32.107325ms: open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/scheduled-stop-142421/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-142421 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-142421 -n scheduled-stop-142421
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-142421
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-142421 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-142421
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-142421: exit status 7 (72.683719ms)

                                                
                                                
-- stdout --
	scheduled-stop-142421
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-142421 -n scheduled-stop-142421
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-142421 -n scheduled-stop-142421: exit status 7 (63.545585ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-142421" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-142421
--- PASS: TestScheduledStopUnix (114.75s)
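
The `retry.go:31` lines above show a poll-with-growing-backoff loop waiting for the scheduled-stop pid file to appear. The sketch below imitates that pattern; it is not minikube's actual retry helper, and the file path and timing constants are invented for illustration.

-- example sketch (not from the test run) --
package main

import (
    "fmt"
    "log"
    "os"
    "time"
)

// waitForFile polls for path with a growing delay until deadline elapses,
// logging each retry much like the retry.go lines above.
func waitForFile(path string, deadline time.Duration) error {
    delay := 100 * time.Microsecond
    stop := time.Now().Add(deadline)
    for {
        if _, err := os.Stat(path); err == nil {
            return nil
        } else if !os.IsNotExist(err) {
            return err
        }
        if time.Now().After(stop) {
            return fmt.Errorf("%s did not appear within %s", path, deadline)
        }
        log.Printf("will retry after %s: %s: no such file or directory", delay, path)
        time.Sleep(delay)
        delay *= 2 // grow the wait roughly like the intervals in the log
    }
}

func main() {
    // Hypothetical path modeled on the pid file in the log.
    if err := waitForFile("/tmp/scheduled-stop-demo/pid", 2*time.Second); err != nil {
        log.Fatal(err)
    }
    fmt.Println("pid file found")
}
-- /example sketch --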

                                                
                                    
TestRunningBinaryUpgrade (207.72s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.485429110 start -p running-upgrade-819936 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E1001 20:06:34.840529   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:06:42.096073   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:06:59.024900   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.485429110 start -p running-upgrade-819936 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m2.884278732s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-819936 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-819936 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m21.523870331s)
helpers_test.go:175: Cleaning up "running-upgrade-819936" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-819936
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-819936: (1.042693148s)
--- PASS: TestRunningBinaryUpgrade (207.72s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-791490 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-791490 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (82.774841ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-791490] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19736
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestPause/serial/Start (81.18s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-170137 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-170137 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m21.177114595s)
--- PASS: TestPause/serial/Start (81.18s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (96.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-791490 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-791490 --driver=kvm2  --container-runtime=crio: (1m35.744397462s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-791490 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (96.01s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (41.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-791490 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-791490 --no-kubernetes --driver=kvm2  --container-runtime=crio: (40.086747485s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-791490 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-791490 status -o json: exit status 2 (278.238731ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-791490","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-791490
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-791490: (1.094593019s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (41.46s)
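
The `status -o json` document above can be decoded into a small struct; the sketch below uses only the fields visible in that output and reuses the captured line verbatim. As the non-zero exit above shows, the command can return status 2 yet still print valid JSON when kubelet is stopped, so the output has to be parsed regardless of the exit code.

-- example sketch (not from the test run) --
package main

import (
    "encoding/json"
    "fmt"
    "log"
)

// status mirrors only the fields present in the JSON printed above.
type status struct {
    Name       string
    Host       string
    Kubelet    string
    APIServer  string
    Kubeconfig string
    Worker     bool
}

func main() {
    // Output captured from the run above.
    raw := `{"Name":"NoKubernetes-791490","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`

    var s status
    if err := json.Unmarshal([]byte(raw), &s); err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", s.Name, s.Host, s.Kubelet, s.APIServer)
}
-- /example sketch --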

                                                
                                    
TestNoKubernetes/serial/Start (27.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-791490 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-791490 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.894743147s)
--- PASS: TestNoKubernetes/serial/Start (27.89s)

                                                
                                    
TestNetworkPlugins/group/false (5.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-983557 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-983557 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (142.429855ms)

                                                
                                                
-- stdout --
	* [false-983557] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19736
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 20:08:08.774628   57857 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:08:08.774886   57857 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:08:08.774917   57857 out.go:358] Setting ErrFile to fd 2...
	I1001 20:08:08.774932   57857 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:08:08.775288   57857 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-11198/.minikube/bin
	I1001 20:08:08.775945   57857 out.go:352] Setting JSON to false
	I1001 20:08:08.777151   57857 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6631,"bootTime":1727806658,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 20:08:08.777277   57857 start.go:139] virtualization: kvm guest
	I1001 20:08:08.779819   57857 out.go:177] * [false-983557] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 20:08:08.781093   57857 notify.go:220] Checking for updates...
	I1001 20:08:08.781163   57857 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 20:08:08.782610   57857 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 20:08:08.783851   57857 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-11198/kubeconfig
	I1001 20:08:08.785031   57857 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-11198/.minikube
	I1001 20:08:08.786122   57857 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 20:08:08.787252   57857 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 20:08:08.788905   57857 config.go:182] Loaded profile config "NoKubernetes-791490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1001 20:08:08.789032   57857 config.go:182] Loaded profile config "force-systemd-env-528861": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 20:08:08.789128   57857 config.go:182] Loaded profile config "running-upgrade-819936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1001 20:08:08.789208   57857 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 20:08:08.841456   57857 out.go:177] * Using the kvm2 driver based on user configuration
	I1001 20:08:08.842651   57857 start.go:297] selected driver: kvm2
	I1001 20:08:08.842697   57857 start.go:901] validating driver "kvm2" against <nil>
	I1001 20:08:08.842727   57857 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 20:08:08.844610   57857 out.go:201] 
	W1001 20:08:08.845794   57857 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1001 20:08:08.846852   57857 out.go:201] 

                                                
                                                
** /stderr **
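Note on the expected failure above: the test deliberately passes --cni=false, and minikube exits with status 14 (MK_USAGE) because the crio container runtime needs a CNI plugin for pod networking. As a hedged illustration only (not something the test runs), a start command that keeps a CNI enabled, for example minikube's built-in bridge CNI, would get past this validation step:

    out/minikube-linux-amd64 start -p false-983557 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio

Any --cni value other than false (including the default auto) should satisfy the crio requirement; the profile name here is reused purely for illustration.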
net_test.go:88: 
----------------------- debugLogs start: false-983557 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-983557

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-983557

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-983557

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-983557

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-983557

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-983557

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-983557

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-983557

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-983557

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-983557

>>> host: /etc/nsswitch.conf:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> host: /etc/hosts:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> host: /etc/resolv.conf:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-983557

>>> host: crictl pods:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> host: crictl containers:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> k8s: describe netcat deployment:
error: context "false-983557" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-983557" does not exist

>>> k8s: netcat logs:
error: context "false-983557" does not exist

>>> k8s: describe coredns deployment:
error: context "false-983557" does not exist

>>> k8s: describe coredns pods:
error: context "false-983557" does not exist

>>> k8s: coredns logs:
error: context "false-983557" does not exist

>>> k8s: describe api server pod(s):
error: context "false-983557" does not exist

>>> k8s: api server logs:
error: context "false-983557" does not exist

>>> host: /etc/cni:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> host: ip a s:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> host: ip r s:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> host: iptables-save:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> host: iptables table nat:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> k8s: describe kube-proxy daemon set:
error: context "false-983557" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-983557" does not exist

>>> k8s: kube-proxy logs:
error: context "false-983557" does not exist

>>> host: kubelet daemon status:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> host: kubelet daemon config:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> k8s: kubelet logs:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-983557

>>> host: docker daemon status:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> host: docker daemon config:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> host: /etc/docker/daemon.json:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> host: docker system info:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> host: cri-docker daemon status:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> host: cri-docker daemon config:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> host: cri-dockerd version:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> host: containerd daemon status:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> host: containerd daemon config:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> host: /etc/containerd/config.toml:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> host: containerd config dump:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> host: crio daemon status:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> host: crio daemon config:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> host: /etc/crio:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

>>> host: crio config:
* Profile "false-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983557"

----------------------- debugLogs end: false-983557 [took: 5.423785398s] --------------------------------
helpers_test.go:175: Cleaning up "false-983557" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-983557
--- PASS: TestNetworkPlugins/group/false (5.73s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-791490 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-791490 "sudo systemctl is-active --quiet service kubelet": exit status 1 (193.627165ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
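For context on the stderr above: systemctl is-active exits 0 only when the unit is active and non-zero (typically 3) when it is inactive, so the "Process exited with status 3" reported through ssh is exactly what confirms kubelet is not running on this NoKubernetes profile. A minimal sketch of the same check on any systemd host (illustrative, not part of the test):

    systemctl is-active --quiet kubelet
    echo $?    # 0 if kubelet is active; non-zero (typically 3) when it is inactive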

                                                
                                    
TestNoKubernetes/serial/ProfileList (28.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.550799705s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (13.602963451s)
--- PASS: TestNoKubernetes/serial/ProfileList (28.15s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-791490
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-791490: (1.351718014s)
--- PASS: TestNoKubernetes/serial/Stop (1.35s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (35.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-791490 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-791490 --driver=kvm2  --container-runtime=crio: (35.959425052s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (35.96s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-791490 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-791490 "sudo systemctl is-active --quiet service kubelet": exit status 1 (273.754091ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.26s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.26s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (138.54s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1732906085 start -p stopped-upgrade-042095 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1732906085 start -p stopped-upgrade-042095 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m35.980529004s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1732906085 -p stopped-upgrade-042095 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1732906085 -p stopped-upgrade-042095 stop: (2.157481878s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-042095 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1001 20:11:17.912392   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:11:34.840413   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-042095 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.404860538s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (138.54s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.85s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-042095
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.85s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (69.54s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-262337 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1001 20:11:59.025352   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-262337 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m9.544374637s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (69.54s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (52.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-106982 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-106982 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (52.903424263s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (52.90s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-262337 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [815f5080-dfac-4639-8d4d-799975d8f0e1] Pending
helpers_test.go:344: "busybox" [815f5080-dfac-4639-8d4d-799975d8f0e1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [815f5080-dfac-4639-8d4d-799975d8f0e1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.003553458s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-262337 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-262337 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-262337 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.016349912s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-262337 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-106982 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ba5e8cf4-b8c8-41e6-a3e2-d4c2914a88a8] Pending
helpers_test.go:344: "busybox" [ba5e8cf4-b8c8-41e6-a3e2-d4c2914a88a8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ba5e8cf4-b8c8-41e6-a3e2-d4c2914a88a8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004456322s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-106982 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-106982 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-106982 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (655.81s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-262337 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-262337 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (10m55.559855833s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-262337 -n no-preload-262337
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (655.81s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (338.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-878552 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-878552 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (5m38.459507982s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (338.46s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (605.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-106982 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1001 20:16:34.839739   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:16:59.025695   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-106982 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (10m5.724084934s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-106982 -n embed-certs-106982
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (605.97s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (6.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-359369 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-359369 --alsologtostderr -v=3: (6.304026063s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (6.30s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-359369 -n old-k8s-version-359369
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-359369 -n old-k8s-version-359369: exit status 7 (63.181464ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-359369 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
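Note on "exit status 7 (may be ok)": per the minikube status help text (an assumption worth verifying against the installed version), the exit code is a bitmask of the components that are not running, 1 for the minikube VM, 2 for the cluster, 4 for Kubernetes, so 7 after a stop means all three are down, which is the state this test expects before re-enabling the dashboard addon. A hedged sketch of reading that code:

    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-359369 -n old-k8s-version-359369
    echo $?    # expected to be 7 here: 1 (VM down) + 2 (cluster down) + 4 (Kubernetes down)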

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-878552 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [201b5cff-932f-4ccf-b227-66a1705b1236] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1001 20:21:59.024409   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/addons-800266/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [201b5cff-932f-4ccf-b227-66a1705b1236] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004550927s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-878552 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.31s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-878552 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-878552 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (618.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-878552 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-878552 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (10m18.251853611s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-878552 -n default-k8s-diff-port-878552
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (618.51s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (45.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-204654 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-204654 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (45.138273011s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.14s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-204654 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-204654 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.092024606s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-204654 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-204654 --alsologtostderr -v=3: (10.372035796s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-204654 -n newest-cni-204654
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-204654 -n newest-cni-204654: exit status 7 (67.535591ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-204654 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (36.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-204654 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1001 20:41:34.839953   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-204654 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (36.268282829s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-204654 -n newest-cni-204654
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.52s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-204654 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-204654 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-204654 -n newest-cni-204654
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-204654 -n newest-cni-204654: exit status 2 (245.465792ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-204654 -n newest-cni-204654
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-204654 -n newest-cni-204654: exit status 2 (243.826104ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-204654 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-204654 -n newest-cni-204654
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-204654 -n newest-cni-204654
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.47s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (56.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-983557 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-983557 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (56.837196375s)
--- PASS: TestNetworkPlugins/group/auto/Start (56.84s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (78.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-983557 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-983557 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m18.466325185s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (78.47s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (77.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-983557 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-983557 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m17.227010722s)
--- PASS: TestNetworkPlugins/group/calico/Start (77.23s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-983557 "pgrep -a kubelet"
I1001 20:42:54.154714   18430 config.go:182] Loaded profile config "auto-983557": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-983557 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nmcs8" [664f300e-4336-4dcb-bad8-b156c589a90c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nmcs8" [664f300e-4336-4dcb-bad8-b156c589a90c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.009167984s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.30s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-983557 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-983557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-983557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-td25s" [5d0c7484-dd1f-4141-8be0-25ef9d65f5f2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005081053s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (75.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-983557 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-983557 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m15.875548994s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (75.88s)
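The Start steps in this group differ mainly in how the CNI is selected: a built-in plugin name (kindnet, calico, flannel, bridge), the default CNI, or, as in custom-flannel, a path to a CNI manifest. A sketch of launching such a profile programmatically and streaming minikube's output; the flags mirror the command recorded above, and the profile name is the one from this run.

// start_custom_cni.go - sketch of a Start step that points --cni at a manifest
// file instead of a built-in plugin name.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start",
		"-p", "custom-flannel-983557",
		"--memory=3072",
		"--wait=true", "--wait-timeout=15m",
		"--cni=testdata/kube-flannel.yaml", // CNI manifest path rather than a built-in name
		"--driver=kvm2",
		"--container-runtime=crio")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("minikube start failed: %v", err)
	}
}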

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-983557 "pgrep -a kubelet"
I1001 20:43:25.736846   18430 config.go:182] Loaded profile config "kindnet-983557": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (13.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-983557 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dww8k" [e287a3f2-c496-49ea-90d1-0657c9b4baef] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1001 20:43:28.062795   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/no-preload-262337/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-dww8k" [e287a3f2-c496-49ea-90d1-0657c9b4baef] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.00380043s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-983557 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-983557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-983557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (92.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-983557 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-983557 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m32.400760509s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (92.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-6z5d9" [620c32e2-6cb5-40a5-84d1-b1b427110416] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004661542s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-983557 "pgrep -a kubelet"
I1001 20:44:16.705901   18430 config.go:182] Loaded profile config "calico-983557": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-983557 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jnst6" [30b2e704-953b-4443-8c8c-a7d850c7eeea] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-jnst6" [30b2e704-953b-4443-8c8c-a7d850c7eeea] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004458955s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-983557 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-983557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-983557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-983557 "pgrep -a kubelet"
I1001 20:44:37.357522   18430 config.go:182] Loaded profile config "custom-flannel-983557": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-983557 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jhtc9" [cd639ad7-3546-4e1e-9d8e-e9c6e8fd1170] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1001 20:44:37.915991   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/functional-338309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-jhtc9" [cd639ad7-3546-4e1e-9d8e-e9c6e8fd1170] Running
E1001 20:44:43.077767   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:44:43.084225   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:44:43.095661   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:44:43.117050   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:44:43.158500   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:44:43.240071   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:44:43.402033   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:44:43.723927   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003404348s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (74.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-983557 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E1001 20:44:45.647562   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:44:48.209902   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-983557 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m14.910579799s)
--- PASS: TestNetworkPlugins/group/flannel/Start (74.91s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-983557 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-983557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-983557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (96.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-983557 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E1001 20:45:24.054653   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-983557 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m36.65385141s)
--- PASS: TestNetworkPlugins/group/bridge/Start (96.65s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-983557 "pgrep -a kubelet"
I1001 20:45:29.526446   18430 config.go:182] Loaded profile config "enable-default-cni-983557": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-983557 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-q94kv" [c47fd837-cb07-41f7-af10-1ff5f090be75] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-q94kv" [c47fd837-cb07-41f7-af10-1ff5f090be75] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.00457921s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-983557 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-983557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-983557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-5sxfq" [4dd6081a-e77f-4c54-ba89-15383bf5cb95] Running
E1001 20:46:05.016568   18430 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/old-k8s-version-359369/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004699164s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-983557 "pgrep -a kubelet"
I1001 20:46:06.576475   18430 config.go:182] Loaded profile config "flannel-983557": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-983557 replace --force -f testdata/netcat-deployment.yaml
I1001 20:46:06.821997   18430 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8kx99" [b0a1530e-4e68-4dc8-9ef2-63b6e2b46539] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8kx99" [b0a1530e-4e68-4dc8-9ef2-63b6e2b46539] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004378991s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-983557 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-983557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-983557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-983557 "pgrep -a kubelet"
I1001 20:46:41.868257   18430 config.go:182] Loaded profile config "bridge-983557": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-983557 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ddk8l" [9e941365-dfa6-4d22-8828-da8dad13aa8c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ddk8l" [9e941365-dfa6-4d22-8828-da8dad13aa8c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.005188528s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-983557 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-983557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-983557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (37/311)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.1/cached-images 0
15 TestDownloadOnly/v1.31.1/binaries 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.3
37 TestAddons/parallel/Olm 0
47 TestDockerFlags 0
50 TestDockerEnvContainerd 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
104 TestFunctional/parallel/DockerEnv 0
105 TestFunctional/parallel/PodmanEnv 0
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
153 TestGvisorAddon 0
175 TestImageBuild 0
202 TestKicCustomNetwork 0
203 TestKicExistingNetwork 0
204 TestKicCustomSubnet 0
205 TestKicStaticIP 0
237 TestChangeNoneUser 0
240 TestScheduledStopWindows 0
242 TestSkaffold 0
244 TestInsufficientStorage 0
248 TestMissingContainerUpgrade 0
256 TestStartStop/group/disable-driver-mounts 0.15
266 TestNetworkPlugins/group/kubenet 3.78
274 TestNetworkPlugins/group/cilium 3.89
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.3s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:783: skipping: crio not supported
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-800266 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-556200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-556200
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-983557 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-983557

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-983557

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-983557

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-983557

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-983557

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-983557

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-983557

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-983557

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-983557

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-983557

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-983557

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-983557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-983557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-983557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-983557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-983557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-983557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-983557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-983557" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-983557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-983557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-983557" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19736-11198/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 01 Oct 2024 20:08:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.39.66:8443
  name: force-systemd-env-528861
contexts:
- context:
    cluster: force-systemd-env-528861
    extensions:
    - extension:
        last-update: Tue, 01 Oct 2024 20:08:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: force-systemd-env-528861
  name: force-systemd-env-528861
current-context: force-systemd-env-528861
kind: Config
preferences: {}
users:
- name: force-systemd-env-528861
  user:
    client-certificate: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/client.crt
    client-key: /home/jenkins/minikube-integration/19736-11198/.minikube/profiles/force-systemd-env-528861/client.key
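
The kubeconfig dumped above explains why every kubenet-983557 query in this section fails with a missing-context error: the only context on the host belongs to the force-systemd-env-528861 profile, and no kubenet-983557 profile was ever created. As a minimal sketch, assuming a shell on the Jenkins host and using only standard kubectl/minikube commands (nothing from the debugLogs harness; the context name kubenet-983557 is taken from this log), this could be confirmed with:

kubectl config get-contexts                   # only force-systemd-env-528861 is listed
kubectl config use-context kubenet-983557     # expected to fail: no such context
minikube profile list                         # shows that no kubenet-983557 profile exists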

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-983557

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983557"

                                                
                                                
----------------------- debugLogs end: kubenet-983557 [took: 3.585918901s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-983557" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-983557
--- SKIP: TestNetworkPlugins/group/kubenet (3.78s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-983557 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-983557

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-983557

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-983557

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-983557

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-983557

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-983557

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-983557

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-983557

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-983557

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-983557

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-983557

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-983557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-983557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-983557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-983557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-983557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-983557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-983557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-983557" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-983557

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-983557

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-983557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-983557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-983557

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-983557

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-983557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-983557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-983557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-983557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-983557" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-983557

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-983557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983557"

                                                
                                                
----------------------- debugLogs end: cilium-983557 [took: 3.743377396s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-983557" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-983557
--- SKIP: TestNetworkPlugins/group/cilium (3.89s)

                                                
                                    